Who is migrating from #fluentd to #vector on #openshift 4?
Which #logging system do you prefer for managing logs in #Kubernetes?
#k8s #log #LogManagement #logs #kibana #elastic #elasticsearch #opensearch #fluent #fluent2 #fluentbit #fluentd #logstash #kafka #grafana #loki #promtail #cncf
[Quick tip] I tweaked the config of Fluentd installed on EC2 to ship files in a time-based directory hierarchy
https://dev.classmethod.jp/articles/ec2-fluentd-configuration-time-hierarchy-file-transfer/
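The gist: put time placeholders in the output path and key the buffer on time. A rough sketch with the file output (tag and paths are made up, and the article itself may use a different output plugin):
```
<match app.**>
  @type file
  # time placeholders in the path create a year/month/day/hour directory hierarchy
  path /var/log/fluent/app/%Y/%m/%d/%H/data
  append true
  <buffer time>
    timekey 1h        # one chunk per hour, so one file per hour-level directory
    timekey_wait 10m
  </buffer>
</match>
```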
New #opentelemetry experiment in the playground: #fluentbit and #fluentd https://github.com/booyaa/opentelemetry-playground/blob/main/experiments/04-fluent/README.md
Installing Fluentd on Amazon Linux 2023 and delivering logs to an S3 bucket
https://dev.classmethod.jp/articles/al2023-fluentd-s3/
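A minimal sketch of the kind of config involved (bucket, region, paths, and tag are placeholders, not the article's exact values):
```
<source>
  @type tail
  path /var/log/messages
  pos_file /var/log/fluent/messages.pos
  tag system.messages
  <parse>
    @type syslog
  </parse>
</source>

<match system.**>
  @type s3                        # fluent-plugin-s3
  s3_bucket my-log-bucket         # placeholder bucket name
  s3_region ap-northeast-1
  path logs/
  <buffer time>
    @type file
    path /var/log/fluent/s3-buffer
    timekey 1h                    # flush one object per hour
    timekey_wait 10m
  </buffer>
</match>
```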
The log-generation part of the data lake hands-on caught my eye, so I dug into it
https://dev.classmethod.jp/articles/insights-into-data-lake-hands-on-logging/
#dev_classmethod #Amazon_EC2 #EC2 #Fluentd #Amazon_Data_Firehose #AWS
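For playing along at home, a hedged sketch using Fluentd's dummy input in place of the hands-on's actual log generator, feeding Amazon Data Firehose (stream name, tag, and sample record are all made up):
```
<source>
  @type dummy
  tag handson.applog
  rate 1                                      # one fake record per second
  dummy {"user_id": 123, "action": "login"}   # made-up sample payload
</source>

<match handson.**>
  @type kinesis_firehose                   # fluent-plugin-kinesis
  delivery_stream_name my-handson-stream   # placeholder stream name
  region ap-northeast-1
  <buffer>
    flush_interval 10s
  </buffer>
</match>
```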
New blog post: https://blog.mei-home.net/posts/some-k8s-logging-changes/
I made a couple of changes to my initial logging pipeline.
I'm also *trying* to learn to write shorter blog posts on smaller things.
And I didn't find out I'd built a loop through some superior metrics and alerting - no, I just heard the fans rev up for no obvious reason. 😂
It all started with wanting to massage the cnpg postgres logs a bit. And while doing that, I saw an "issue". There was a wayward "time" field which I had no use for. And now I'm revamping my entire log setup. 🤦
Days since I built an endless loop in my logging pipeline: 0 🎉
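For the wayward "time" field, a record_transformer filter can simply drop it (the tag here is a guess at how the CNPG postgres pods end up tagged):
```
<filter kube.cnpg.**>          # guessed tag for the CNPG postgres pods
  @type record_transformer
  remove_keys time             # drop the unwanted "time" field from every record
</filter>
```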
Can't figure out how to get mastodon-web.service logging into Better Stack via Fluentd... Fluentd is a completely new thing for me.
I mean this: sudo journalctl -u mastodon-web.service --all -f
I want to get that debug log into Logtail. But even though everything is configured and up and running, Logtail shows nothing. Nada.
Let's wait for support to reply for the third time.
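For what it's worth, the piece I'm experimenting with is fluent-plugin-systemd's input for exactly that journald unit - a sketch assuming the default journal path (the tag and storage path are made up, and the Better Stack output would then match the resulting tag):
```
<source>
  @type systemd                  # fluent-plugin-systemd
  tag mastodon.web
  path /var/log/journal
  matches [{ "_SYSTEMD_UNIT": "mastodon-web.service" }]
  read_from_head false
  <storage>
    @type local
    persistent true
    path /var/log/fluent/mastodon-web.pos
  </storage>
  <entry>
    fields_strip_underscores true   # _SYSTEMD_UNIT -> SYSTEMD_UNIT, etc.
  </entry>
</source>
```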
Did you ever wish somebody would write a 30-minute epic on how to deploy a Fluent Bit/Fluentd/Loki logging stack? Today's your lucky day!
https://blog.mei-home.net/posts/k8s-migration-6-logging/
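The Fluentd end of the stack boils down to a forward input for the Fluent Bit DaemonSet plus a Loki output - roughly like this sketch (endpoint and tag are placeholders, see the post for the real config):
```
<source>
  @type forward                  # receives records shipped by Fluent Bit
  bind 0.0.0.0
  port 24224
</source>

<match kube.**>
  @type loki                     # fluent-plugin-grafana-loki
  url http://loki-gateway.loki.svc:3100   # placeholder in-cluster Loki endpoint
  extra_labels {"job":"fluentd"}
  <buffer>
    flush_interval 10s
  </buffer>
</match>
```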
I call it "Oooops", alternative title "I know exactly why this happened. I knew it would happen before it happened, in fact".
Fluentd's stdout went completely haywire: I feed Fluentd's own logs through the log pipeline, and because they're unparsed right now, they go back to stdout - rinse and repeat. 😅
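One way to break a loop like that (a sketch, not necessarily what I'll end up with): route Fluentd's own events into a dedicated label so they never re-enter the main pipeline, and keep the tail input away from Fluentd's own container log.
```
# Fluentd's internal logs (tag fluent.*) get routed here instead of the normal pipeline
<label @FLUENT_LOG>
  <match fluent.**>
    @type null                   # or send them somewhere that isn't tailed again
  </match>
</label>

<source>
  @type tail
  path /var/log/containers/*.log
  exclude_path ["/var/log/containers/fluentd-*.log"]   # guessed pattern: skip Fluentd's own container log
  pos_file /var/log/fluent/containers.pos
  tag kube.*
  <parse>
    @type none
  </parse>
</source>
```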
Okay, got my logging setup properly primed now - only sending the logs which haven't been properly parsed yet to stdout so I can slowly implement their parse filters, and everything else gets forwarded to Loki for storage.
Ceph is almost done, next is going to be the control plane logs.
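The routing is just Fluentd's top-to-bottom match order: explicit matches for tags that already have parse filters go to Loki, and a catch-all at the end dumps the rest to stdout. A sketch with made-up tags and endpoint:
```
<match kube.ceph.**>                        # tags that already have parse filters
  @type loki
  url http://loki-gateway.loki.svc:3100     # placeholder Loki endpoint
  extra_labels {"app":"ceph"}
</match>

<match **>
  @type stdout                              # anything landing here still needs a parse filter
</match>
```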
Exciting #breaking news today!
#chronosphereio is acquiring #CalyptiaInc, founded by the creators of the #CNCF projects #fluentd and #fluentbit, bringing cloud native log transformation and optimization capabilities into its cloud native observability platform.
https://chronosphere.io/learn/chronosphere-crowdstrike-announcement/
From this week's ADMIN Update newsletter, Artur Skura explores Fluentd and Fluent Bit to help unify data collection and consumption https://www.admin-magazine.com/Archive/2023/77/A-modern-logging-solution #Fluentd #FluentBit #OpenSource #logging #debugging #monitoring #troubleshooting #FOSS #data #LogManagement
Did you miss #ObservabilityDay at #KubeCon yesterday?
#FluentBit creator Eduardo Silva Pereira shared exciting updates in my fireside chat with him at OpenObservability Talks: the v2.2 release, a new secret project, a cool UI 🤫, and more:
📺 https://www.youtube.com/watch?v=V02Ctv0Rtg8&t=2313s
or check out the TL;DR post: https://lnkd.in/djd_eHjb
#KubeConNA #DevOps #kubecon2023 #kubecon23 #fluentd #opensource #observability #cloudnative
Me watching millions of log events per day funnel into an #OpenSearch cluster via #fluentd.
I think I've finally tamed this #OpenSearch setup on this #Rancher #RKE2 cluster. Today's adventure was schema conflicts: some pods carry a plain "app" label while others use app.kubernetes.io labels, which makes ingestion choke because OpenSearch sees a string where it expects an object. The flatten-hashes option on the #fluentd output wasn't quite enough to cut it, but the dedot filter brought it the rest of the way there.
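For anyone hitting the same mapping conflict, the combination looks roughly like this (host and tags are placeholders, not my exact config):
```
<filter kube.**>
  @type dedot                          # fluent-plugin-dedot_filter
  de_dot true
  de_dot_separator _                   # app.kubernetes.io/name -> app_kubernetes_io/name
</filter>

<match kube.**>
  @type opensearch
  host opensearch.example.internal     # placeholder endpoint
  port 9200
  scheme https
  logstash_format true
  flatten_hashes true                  # the flatten-hashes option mentioned above
  flatten_hashes_separator _
</match>
```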