#GKE

2025-06-09

See how Google #Kubernetes Engine (#GKE) and AI come together to power real-world applications — faster, smarter, and more securely. Bobby Allen (Google Cloud Therapist) discusses why enterprises are turning to GKE for AI workloads with The New Stack's Frederic Lardinois at #GoogleCloudNext.

bigorre.org bigorre_org
2025-05-25

Aviation weather for Geilenkirchen Air Base airport (Germany) is “ETNG 251750Z AUTO 25018KT 9999 // FEW041/// BKN250/// BKN320/// 19/10 Q1011” : See what it means on bigorre.org/aero/meteo/etng/en vl
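For readers curious what those METAR groups encode, here is a minimal decoding sketch (a simplification: real METARs have many more group types, and the `///`-suffixed cloud groups in this report mean the automated sensor could not determine cloud type):

```python
import re

def parse_metar(metar: str) -> dict:
    """Decode a few common METAR groups: wind, visibility, temp/dewpoint, QNH."""
    out = {"station": metar.split()[0]}

    m = re.search(r"\b(\d{3})(\d{2})KT\b", metar)
    if m:
        out["wind_dir_deg"] = int(m.group(1))   # direction the wind blows FROM
        out["wind_speed_kt"] = int(m.group(2))

    if re.search(r"\b9999\b", metar):
        out["visibility_m"] = 10000             # 9999 means 10 km or more

    m = re.search(r"\b(M?\d{2})/(M?\d{2})\b", metar)
    if m:
        def to_c(s):                            # leading "M" marks negative values
            return -int(s[1:]) if s.startswith("M") else int(s)
        out["temp_c"] = to_c(m.group(1))
        out["dewpoint_c"] = to_c(m.group(2))

    m = re.search(r"\bQ(\d{4})\b", metar)
    if m:
        out["qnh_hpa"] = int(m.group(1))        # altimeter setting in hectopascals

    return out

report = "ETNG 251750Z AUTO 25018KT 9999 // FEW041/// BKN250/// BKN320/// 19/10 Q1011"
print(parse_metar(report))
```

For the report above this yields wind from 250° at 18 kt, visibility 10 km or more, 19 °C with a 10 °C dewpoint, and QNH 1011 hPa.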

bigorre.org bigorre_org
2025-05-14

Beautiful weather for takeoff from Geilenkirchen Air Base airport (Germany) “ETNG 141350Z 33010KT CAVOK 23/06 Q1017 NOSIG” : See what it means on bigorre.org/aero/meteo/etng/en vl

Daryl Joseph Ducharme spoilerdiacre@craftodon.social
2025-05-09

Kubernetes 1.33 is available on GKE!🎉

Huge thanks to the amazing team at Google who contributed to the Kubernetes 1.33 release on GKE! 🙌 From reliable upgrades to improved API server performance, their hard work is driving innovation in the cloud.

Read about their contributions here: opensource.googleblog.com/2025
#GKE #Kubernetes #OpenSource

Kubernetes 1.33 logo
2025-05-08

GKE — VPC-Native, Pods, and the VPC Firewall: Marketing vs. Reality

Hi, Habr! Google, without exaggeration, changed the IT world by giving us Kubernetes, the system that has become the de facto standard for container orchestration. And when you choose managed Kubernetes from its own creators, such as Google Kubernetes Engine (GKE), expectations are naturally high. If anyone should be able to "cook" their own creation perfectly, it's the original source, offering not just convenience but transparent, deeply integrated, secure solutions out of the box. Especially when it comes to something as fundamental as networking and network security.

GKE offers two cluster modes: routes-based and VPC-native. It is the VPC-native clusters that Google positions as providing tighter integration with the VPC network. As Google states, one advantage of such clusters is that pod IP addresses are natively routable within the cluster's VPC network and in other VPC networks connected to it via VPC Network Peering (see the GKE documentation on IP aliases and VPC-native clusters). This inspires confidence that VPC capabilities, including the powerful GCP Firewall mechanism, will be available to our pods just as easily and natively as they are to ordinary virtual machines.

However, once you dig into the details of configuring network access control for pods reaching resources inside the VPC but external to Kubernetes itself (for example, Cloud SQL databases or other backends), you start running into nuances. Nuances that make you doubt how "seamless" this integration really is.

This article is not an attempt to belittle the achievements of Google or GKE. Rather, it is an occasion for all of us engineers to think about the important implementation details that often stay "under the hood": to dig deeper, understand how things actually work, and see what trade-offs and complexities hide behind the marketing slogans. After all, the more complex a security architecture, the higher the chance of a configuration mistake, especially when its components and their interactions are not fully understood. If even a giant like Google has such non-obvious corners in its flagship Kubernetes product, then it is all the more important for those of us who work with these systems every day to understand the fine points, so that our own environments stay reliable and secure.
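The article's point can be made concrete: because pod IPs come from a secondary range of the VPC subnet, a VPC firewall rule meant to cover traffic from pods must include the pod CIDR explicitly, not just the node range. A hypothetical sketch (the names, CIDRs, and network here are illustrative assumptions, not taken from the article):

```shell
# Hypothetical: allow pods (secondary range 10.8.0.0/14) to reach a
# backend VM tagged "db" on port 5432. A rule matching only the node
# subnet would miss traffic that leaves a pod with its own pod IP.
gcloud compute firewall-rules create allow-pods-to-db \
  --network=my-vpc \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:5432 \
  --source-ranges=10.8.0.0/14 \
  --target-tags=db
```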

habr.com/ru/articles/907774/

#kubernetes #gke #firewall #network_security

Scott Williams 🐧 vwbusguy@mastodon.online
2025-05-07

I just killed our last #JupyterHub #GKE Project in my $DAYJOB. Our #Jupyter compute environments are now all fully on prem.

#Python #DataScience #Kubernetes #Baremetal #RStats #RStudio

Nicolas Fränkel 🇺🇦🇬🇪 frankel@mastodon.top
2025-05-04

I’m working on #Kubernetes these days. Recently, I wrote a series on how one could design a full-fledged testing pipeline targeting #GoogleKubernetesEngine. The second part mentions creating a #GKE instance in the context of a GitHub workflow. In this post, I want to assess #Crossplane by creating such an instance.

blog.frankel.ch/feet-wet-cross

2025-04-10

Kubernetes Storage Without the Pain: Simplyblock in 15 Minutes

Whether you're building a high-performance cloud-native app or running data-heavy workloads on your own infrastructure, persistent storage is a necessity. In Kubernetes, this means storage that survives pod restarts, failures, and rescheduling events—and that's precisely what simplyblock brings to the table: blazing-fast, scalable, software-defined storage with cloud economics. A hyper-converged storage solution like simplyblock enables Kubernetes storage par excellence. In […]

simplyblock.io/blog/install-si

Simplyblock deployment model options: Disaggregated, Hyper-Converged, Hybrid
vsoch vsoch
2025-04-07

Finally, here are some practical suggestions. If you use , bring up a small cluster first just to pull containers, which are cached across clusters. If you are using AWS, might help if you don't need to load large data during runtime. On any cloud, an SSD is always going to help.

Meysam meysam81
2025-03-06

One of the most annoying parts of writing with cluster as opposed to and is that the authentication plugin for kubectl takes as long as 20-30s before I can get a response from the API server.

I can't imagine myself staying with for more than the upcoming year. 🏃🏼‍♂️

Nicolas Fränkel 🇺🇦🇬🇪 frankel@mastodon.top
2025-02-23

This week’s post is the third and final in my series about running tests on #Kubernetes for each pull request. In the 1st post, I described the app and how to test locally using #Testcontainers and in a #GitHub workflow. The second post focused on setting up #GKE and running end-to-end tests on Kubernetes.

In this post, I’ll show how to benefit from the best of both worlds with #vCluster: a single cluster with testing from each PR in complete isolation from others.

blog.frankel.ch/pr-testing-kub

Nicolas Fränkel 🇺🇦🇬🇪 frankel@mastodon.top
2025-02-16

I’m continuing my series on running the test suite for each PR on #Kubernetes. In the previous post, I laid the groundwork for our learning journey.

This week, I will raise the ante:

* Create and configure a #GoogleKubernetesEngine instance
* Create a Kubernetes manifest for the app, with #Kustomize for customization
* Allow the #GitHub workflow to use the #GKE instance
* Build the Docker image and store it in the GitHub Docker repo
* Finally, run the end-to-end test

blog.frankel.ch/pr-testing-kub
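The steps above might map onto a GitHub workflow roughly like this (a sketch under assumptions: the project, cluster, secret, and script names are hypothetical, and the linked posts are the authoritative version):

```yaml
name: pr-e2e
on: [pull_request]
jobs:
  e2e:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Authenticate to Google Cloud (secret name is hypothetical)
      - uses: google-github-actions/auth@v2
        with:
          credentials_json: ${{ secrets.GCP_SA_KEY }}
      # Fetch a kubeconfig for the GKE instance
      - uses: google-github-actions/get-gke-credentials@v2
        with:
          cluster_name: pr-testing
          location: europe-west1
      # Build the Docker image and store it in the GitHub registry
      - run: |
          docker build -t ghcr.io/${{ github.repository }}:${{ github.sha }} .
          docker push ghcr.io/${{ github.repository }}:${{ github.sha }}
      # Apply the Kustomize-built manifest, then run the end-to-end tests
      - run: kubectl apply -k kustomize/overlays/gke
      - run: ./run-e2e-tests.sh
```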

2025-02-14

HOLY MOLEY. I think I've done it. Given the amount of docs on this, I wouldn't be surprised to hear I'm basically the only person on the planet who's got this working!

The crucial bit I'd missed is that even though the GCE ingress terminates SSL, it will - for HTTP/2 only - re-encrypt the connection to your backend. Your backend service therefore needs to be able to talk SSL. Any old cert will do - I generated some self-signed ones.

The thing that took me a day to figure out is that a bad SSL handshake just looks like a network connection failure to the load balancer, so I spent ages doing network debugging.

What. A. Palaver.
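A sketch of the Service side of this setup, assuming a gRPC backend listening with TLS on port 8443 (the names are hypothetical; the `cloud.google.com/app-protocols` annotation is what tells the GCE load balancer to speak HTTP/2, and hence TLS, to the backend):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: grpc-backend        # hypothetical name
  annotations:
    # Instruct the GCE load balancer to use HTTP/2 (re-encrypted) to this port
    cloud.google.com/app-protocols: '{"grpc":"HTTP2"}'
spec:
  type: NodePort
  ports:
    - name: grpc            # must match the key in the annotation above
      port: 443
      targetPort: 8443      # the pod must terminate TLS here; self-signed is fine
  selector:
    app: grpc-backend
```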

#grpc #gke #gcp #tech

Kevin Gautreau kgaut@oisaur.com
2024-10-17

Summary of my day spent debugging #gcp #gke

Exchange on Teams: first message, "ok dang, I think I found it". Second message right after: "ah, no"
Gea-Suan Lin gslin@abpe.org
2024-09-29

GKE is promoting 240.0.0.0/4 as private IP space

Saw "Leveraging Class E IPv4 Address space to mitigate IPv4 exhaustion issues in GKE": GKE is pushing 240.0.0.0/4 for use as private IP space. The article keeps stressing that 240.0.0.0/4 runs without any real problems, and it can also be done via N
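For a sense of scale, the 240.0.0.0/4 (Class E) block is enormous compared to RFC 1918 space, which is exactly why it helps with IPv4 exhaustion. A quick check with Python's `ipaddress` module:

```python
import ipaddress

class_e = ipaddress.ip_network("240.0.0.0/4")
rfc1918_10 = ipaddress.ip_network("10.0.0.0/8")

# 240.0.0.0/4 holds 16x the addresses of the whole 10.0.0.0/8 block
print(class_e.num_addresses)                                  # 268435456
print(class_e.num_addresses // rfc1918_10.num_addresses)      # 16

# e.g. giving each cluster a /14 pod range: how many fit in Class E?
pod_range = ipaddress.ip_network("240.0.0.0/14")
print(class_e.num_addresses // pod_range.num_addresses)       # 1024

# The standard library still marks the block as IETF-reserved
print(ipaddress.ip_address("240.0.0.1").is_reserved)          # True
```

That reserved status is the crux of the debate: the space is huge and unused, but some OS network stacks and middleboxes still refuse to route it.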

blog.gslin.org/archives/2024/0

#Cloud #Computer #GCP #Murmuring #Network #Service #address #cloud #engine #gke #google #ip #ipv4 #kubernetes #platform #private

Clemens Vasters' Pictures 📷 clemensv@photog.social
2024-08-22
NATO Air Base Geilenkirchen Open Day 2017 #photography #avgeek #aviation #nato #wearenato #gke #etng #highlight (Flickr 01.07.2017) https://www.flickr.com/photos/7489441@N06/34912020153
2024-08-20

Any thoughts on why it takes under 5 minutes to provision a simple 2-node test #cluster on #AKS with #Terraform, but 25-30 mins to do the same on #EKS and #GKE? No hate, but it makes me want to avoid the latter two when I want to test something quickly.

#kubernetes #k8s

2024-08-15

I put together a #howto guide which explains how to create a #GKE cluster on Google Cloud using #terraform deployed using #githubactions

This might be of interest to someone, I wrote it so I don’t forget

#devops #tech #sysadmin #tech

daveknowstech.notion.site/How-

Johannes Schnattererschnatterer@floss.social
2024-08-05

⚠️ Heads up #velero users!

When you're relying on metric-based #prometheus alerts for velero, make sure to alert not only on
`velero_backup_failure_total` but also on
`velero_backup_partial_failure_total` 🧐

After running velero reliably for years on #gke our backups suddenly started failing *partially*.

Turns out #GCP must have changed something in AuthN, requiring an additional role to perform disk snapshots.
As this resulted in partial failures only we almost missed it.

Screenshot of grafana showing the two metrics in a promql query window:

rate(velero_backup_partial_failure_total{schedule=~".*"}[5m])

rate(velero_backup_failure_total{schedule=~".*"}[5m])
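An alerting rule covering both failure counters, in the spirit of the advice above, might look like this (a sketch: the rule names, window, and severities are assumptions):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: velero-backup-alerts
spec:
  groups:
    - name: velero
      rules:
        - alert: VeleroBackupFailed
          expr: increase(velero_backup_failure_total[1h]) > 0
          labels:
            severity: critical
        - alert: VeleroBackupPartiallyFailed
          # Easy to miss: partial failures increment a separate counter
          expr: increase(velero_backup_partial_failure_total[1h]) > 0
          labels:
            severity: warning
```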
