Implementing my own Time Series Database: Core Structure (Part 1)
https://blog.libove.org/posts/implementing-my-own-time-series-database--core-structure--part-1-/
Implementing my own Time Series Database: Data Layout (Part 1)
https://blog.libove.org/posts/implementing-my-own-time-series-database--data-layout--part-1-/
🚀 #Kubernetes #monitoring Made Easy with VictoriaMetrics Cluster
Our technical #guide walks you through setting up a VictoriaMetrics cluster using Helm charts, collecting k8s metrics via service discovery, and visualizing your data effortlessly.
🟣 What you'll learn:
✅ Deploying #VictoriaMetrics in Kubernetes with #Helm
✅ Scraping #metrics from #k8s components
✅ Storing & visualizing data in VictoriaMetrics #tsdb
https://docs.victoriametrics.com/guides/k8s-monitoring-via-vm-cluster/
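The first step from the guide can be sketched as follows. The chart repository and chart name are the publicly documented ones; the release name `vmcluster` is an illustrative choice, and a real deployment would usually pass a values file.

```shell
# Add the official VictoriaMetrics Helm chart repository and install the
# cluster chart with default values (release name "vmcluster" is arbitrary).
helm repo add vm https://victoriametrics.github.io/helm-charts/
helm repo update
helm install vmcluster vm/victoria-metrics-cluster
```

The guide then layers scraping (via `vmagent` and Kubernetes service discovery) and visualization on top of this base install.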
🧊 #KubeCon + #CloudNativeCon Europe 2025 is coming to London 🇬🇧, April 1-4
#VictoriaMetrics is a Silver Sponsor; get in touch at our booth, and learn more about #tsdb logs management, and #Observability with our experts. Stay tuned! 📡
https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/
Metrics are usually stored with a nanosecond timestamp in time series databases (#TSDB). ⌚
During an analysis of metrics stored in an #InfluxDB Time Series Database, a third-party company got involved. Unfortunately, they were not able to understand what a nanosecond timestamp is and needed a translation into a "human" date format. 🙍♂️
Luckily, it's easy to change the date/time format. In my latest #blog article, I describe how this can be done on any #Linux machine. 💡
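The conversion described above can be sketched in a few lines. The timestamp value here is hypothetical, chosen to land on a round UTC instant; the only real trick is dividing nanoseconds down to whole seconds first.

```python
from datetime import datetime, timezone

# Hypothetical nanosecond timestamp as stored by InfluxDB
# (2025-01-15 00:00:00 UTC plus some sub-second nanoseconds).
ts_ns = 1736899200123456789

# Integer-divide down to whole seconds, then build an aware UTC datetime.
dt = datetime.fromtimestamp(ts_ns // 1_000_000_000, tz=timezone.utc)
print(dt.isoformat())  # 2025-01-15T00:00:00+00:00
```

On a Linux shell, GNU `date -u -d @1736899200` prints the same instant without any scripting.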
An update on the #opensource exit of InfluxDB:
DB Gossip...
Sad, but I saw this coming. InfluxDB is officially dead.
V3 won't come as a community version anymore.
Source: https://www.reddit.com/r/influxdb/comments/1i0fmkf/comment/m7in3k3/
Hundreds of thousands of FOSS projects integrated InfluxDB 1 or 2. It gained high popularity because of its light footprint, incredibly good data compaction, and fast query engine.
V3 "core" (formerly Edge) will only have 72 hours of data retention (lol?). Beyond that, you will have to buy Enterprise or use their cloud offering.
Reader Comment – TSDB Screening for ChemLock – Reader suggestions always welcomed – https://tinyurl.com/36nhpx2p #ReaderComment #TSDB
Monitoring business processes with OpenTelemetry
If you have a large, complex product developed by several teams, it's hard to avoid the situation where production is down, the business is at a standstill, and the engineers spend hours pointing fingers at each other, each convinced the problem is on the other side. Finding the right solution requires not so much the right tool as a shared approach to monitoring every part of the application. In this article I'll describe how we brought several different development teams at Raiffeisen Online together around a shared Observability practice and how we track the health of business processes using purely technical metrics; how all of this helps us find the root cause of an outage instantly; and how OpenTelemetry works and how it can be used to calculate application availability in nines, as well as MTTR (Mean Time to Recovery).
https://habr.com/ru/companies/oleg-bunin/articles/865690/
#opentelemetry #monitoring #endtoend_testing #999 #tracing #collector #zscore #tsdb #mttr #availability
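The "availability in nines" and MTTR figures mentioned above reduce to simple arithmetic over incident durations. A minimal sketch with hypothetical downtime data over a 30-day window:

```python
import math

# Hypothetical downtime per incident, in minutes, over a 30-day period.
incidents_minutes = [12.0, 45.0, 8.0]
period_minutes = 30 * 24 * 60  # 43,200 minutes in a 30-day month

downtime = sum(incidents_minutes)
availability = 1 - downtime / period_minutes       # fraction of time up
mttr = downtime / len(incidents_minutes)           # mean time to recovery

# "Nines" is just the negative log10 of the unavailability:
# 0.999 -> 3 nines, 0.9999 -> 4 nines, etc.
nines = -math.log10(1 - availability)
print(f"availability={availability:.5f} ({nines:.2f} nines), MTTR={mttr:.1f} min")
```

With these numbers, 65 minutes of downtime yields roughly 99.85% availability (about 2.8 nines) and an MTTR of about 21.7 minutes.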
Resource consumption in Prometheus: who is to blame and what to do (talk overview and video)
Prometheus has a serious drawback: excessive resource consumption. The problem often comes down to an insufficient understanding of the tool and its misuse, because Prometheus requires careful management of metrics and labels. In his talk, Vladimir Guryanov, technical director of the Deckhouse Observability Platform, works out who is to blame and what to do about it.
https://habr.com/ru/companies/flant/articles/848968/
#prometheus #tsdb #monitoring #devops #devopsconf #deckhouse #metrics #resource_consumption #labels #mimirtool
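The label-management point above comes down to multiplication: the number of time series for one metric is the product of the cardinalities of its labels, so a single careless label multiplies resource usage. A sketch with hypothetical label counts:

```python
# Hypothetical label cardinalities for a single metric name.
label_cardinalities = {"instance": 100, "path": 50, "status": 5}

# Worst-case series count is the product of each label's cardinality.
series = 1
for n in label_cardinalities.values():
    series *= n
print(series)  # 25000 series for one metric name
```

Adding, say, a user-id label with 10,000 distinct values would turn those 25,000 series into 250 million, which is exactly the kind of cardinality explosion the talk warns about.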
Welp, I didn't think I'd get excited about a Postgres extension today, but here we are!
An interesting, detailed, and objective TSM-Bench benchmark of time-series database systems (TSDBs), covering the #opensource TSDBs #ClickHouse, #Apache #Druid, #InfluxDB, #TimescaleDB, #MonetDB, and #QuestDB, as well as the commercial #TSDB #eXtremeDB
Does anyone have experience with long-term archiving of #TSDB, i.e. time series databases?
#Prometheus says the following in its docs:
- With proper architecture, it is possible to retain years of data in local storage.
But also:
- Again, Prometheus's local storage is not intended to be durable long-term storage; external solutions offer extended retention and data durability.
So which is it?
And if local storage does work, how many years does "years of data" actually mean?
I'd need 10+ years.
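For what it's worth, local retention in Prometheus is bounded by two real flags, sketched below with illustrative values. This only controls how long blocks are kept, not durability: the docs' recommendation for multi-year durable storage is `remote_write` to an external system such as Thanos, Mimir, or VictoriaMetrics.

```shell
# Sketch: cap local retention by time and/or size (values are illustrative).
prometheus \
  --storage.tsdb.retention.time=10y \
  --storage.tsdb.retention.size=500GB
```

Whichever limit is hit first wins, so a 10-year time limit is only meaningful if the disk budget actually holds 10 years of blocks.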
Keep #k8s costs under control with #OpenCost and #VictoriaMetrics
https://victoriametrics.com/blog/monitoring-kubernetes-costs-with-opencost-and-victoriametrics/
How to make high cardinality work in time series databases: Part 1
https://last9.io/blog/how-to-make-high-cardinality-work-in-time-series-databases-part-1/
CRS Reports – Week of 7-15-23 – TSDB Legal Challenges – The Terrorist Screening Database is used for vetting by many programs including TWIC, HME, and CFATS - https://tinyurl.com/5n82se6p #CRS #TSDB
Never-firing #alerts: What they are and how to deal with them
https://victoriametrics.com/blog/never-firing-alerts/
#sre #devops #observability #monitoring #oncall #victoriametrics #prometheus #tsdb #vmalert
Monitoring benchmark: how to generate 100 million samples/s of production-like data
While the fact that VictoriaMetrics can handle a data ingestion rate of 100 million samples per second for one billion active time series is newsworthy on its own, the benchmark tool used to generate that kind of load is usually overlooked.
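The two headline numbers above are mutually consistent if each series is sampled roughly every 10 seconds, which is an assumed (but typical) scrape interval, not something stated in the post:

```python
# One billion active series, each producing a sample every ~10 seconds
# (assumed scrape interval), works out to the quoted ingestion rate.
active_series = 1_000_000_000
scrape_interval_s = 10

samples_per_second = active_series // scrape_interval_s
print(samples_per_second)  # 100000000
```

A benchmark generator therefore has to emit on the order of 10^8 production-like samples per second, which is why the tooling itself is the interesting part.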