I created some code to reformat the output of ip --brief to look a bit cleaner (IMO).
You can find the code in https://github.com/SebastianMeisel/mybashrc .
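Not the repo's actual code, but a minimal sketch of the idea: pipe `ip -br addr` through awk and realign the columns (the column widths here are my own guesses):

```shell
# Sketch: realign the columns of `ip -br addr` output.
# Usage: ip -br addr | brief_fmt
brief_fmt() {
  awk '{printf "%-12s %-10s", $1, $2          # interface name, state
        for (i = 3; i <= NF; i++) printf " %s", $i   # all addresses
        print ""}'
}

# Works on any text shaped like `ip -br addr` output:
printf 'lo UNKNOWN 127.0.0.1/8 ::1/128\n' | brief_fmt
```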
What should your mutexes be named?
https://gaultier.github.io/blog/what_should_your_mutexes_be_named.html
#programming #golang #structural-search #awk
Using Awk to find out the FBI was paying scrapers to find Torswats
https://blog.freespeechextremist.com/blog/fse-vs-fbi.html
#HackerNews #Awk #FBI #Scrapers #TorSwats #Cybersecurity #FreeSpeech
Well, those #bash and #awk scripts that I wrote to check whether the SFTP server has files for us worked: they detected that files didn't arrive in our expected window.
I wish the team responsible had a tool to monitor when the server has issues... Wait, silly me! They are using a tool (us), so they don't have to buy, install, configure, and run one. They assume the server is working until we send an email saying there's a problem.
The scripts are ugly, but they work and are clearly documented: what they do and what their parameters mean. I wrote them so that it's easy to run the script with different parameters for any additional files we need to monitor.
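A sketch of the shape such a check might take (filenames and layout are my own guesses, not the actual scripts): capture the remote listing, then let awk's exit status say whether the expected file showed up:

```shell
# Hypothetical: listing.txt holds `ls -l` output captured from an sftp
# batch session. awk exits 0 if a file matching the pattern is present.
printf '%s\n' '-rw-r--r-- 1 ftp ftp 1024 Jan 01 02:00 report_20250101.csv' > listing.txt
if awk '$NF ~ /^report_.*\.csv$/ { found = 1 } END { exit !found }' listing.txt
then
    echo "expected file present"
else
    echo "ALERT: expected file missing"   # a real script would send mail here
fi
```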
On migrating from Pocket, file conversions and link rot.
Learned a few things about #awk and #gawk and ended up picking #python to get it done.
https://tiagoafpereira.net/blog/posts/2025-06-04-migrating-from-pocket/
TIL that modifying NF in #awk has an instant effect, so if you do
$ echo a b c | awk '{while (NF){print $(NF--)}}'
the post-decrement's side effect of shrinking NF rebuilds the record before the field is actually read, resulting in it printing blanks, so instead you have to access the field before doing the post-decrement:
$ echo a b c | awk '{while (NF){print $NF; NF--}}'
to print each item in reverse.
Which seems weird given what I understand about how post-decrement is *supposed* to work: the expression does yield the old index, but by the time $3 is dereferenced, the assignment to NF has already rebuilt $0 with fewer fields.
(HT: @drscriptt whose #awk sent me down this rabbit-hole of learning)
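The fix in action, plus a one-liner showing why the first form fails: assigning to NF rebuilds $0 on the spot (gawk and mawk both truncate the record; not every historical awk did):

```shell
# Decrementing NF immediately rebuilds $0, which is why $(NF--) reads a
# truncated record: the side effect lands before $3 is fetched.
echo a b c | awk '{NF--; print "NF=" NF, "$0=[" $0 "]"}'
# NF=2 $0=[a b]

# Reading $NF before shrinking NF prints the fields in reverse:
echo a b c | awk '{while (NF) {print $NF; NF--}}'
# c
# b
# a
```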
@gumnos That didn't quite work, but I did get it to work.
Here's what I have:
awk -F. '{for (C=NF;C; C--){printf "%s.", $C}; printf "oid."}'
NF-1 was missing a component.
I also added curly braces around the first printf to make it more obvious what the for loop applies to.
I wasn't aware that, without braces, a for loop only applies to the very next statement. #TIL #AWK
Well I did a thing.
I created a #DNS zone, .oid, on my DNS server for #OIDs.
I can now easily look up OID values with dig (et al.):
% dig +short txt 2.3.7.5.5.1.6.3.1.oid.
to look up OID 1.3.6.1.5.5.7.3.2.
I also wrote a one* line shell script to make doing the lookups easier:
\dig +short txt $(echo ${1} | awk -F. '{for (C=NF; C>1; C--){printf "%s.", $C}; printf "%s.oid.", $1}') | sed 's/"//g' #awk
So I can now run:
% oidlookup 1.3.6.1.5.5.7.3.2
and get the following output:
{iso(1) identified-organization(3) dod(6) internet(1) security(5) mechanisms(5) pkix(7) kp(3) id-kp-clientAuth(2)}
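For anyone wanting to replicate this, the zone data presumably looks something like the following hypothetical BIND-style snippet; the actual records are my guess based on the dig output above:

```
$ORIGIN oid.
$TTL 3600
; OID 1.3.6.1.5.5.7.3.2 stored under its reversed labels:
2.3.7.5.5.1.6.3.1  IN  TXT  "{iso(1) identified-organization(3) dod(6) internet(1) security(5) mechanisms(5) pkix(7) kp(3) id-kp-clientAuth(2)}"
```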
Exploring the Birthday paradox (inefficiently) in Bash:
for j in {1..1000};do for i in {1..23};do date +%m-%d --date="+$RANDOM days";done|awk '{a[$0]++}END{s=0;for(i in a){if(a[i]>1){s=1;break}}print s}';done|sort|uniq -c
Theoretical result would be 493 0s and 507 1s. There may be modulo bias from mapping the full range of $RANDOM to days of the year. Yes, it does call GNU `date` 23,000 times ...
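The 493/507 split comes from the classic closed form: the chance that 23 independent uniform birthdays are all distinct is the product of (365-i)/365 for i = 0..22, so a collision happens with probability about 0.5073. A quick check in awk (365-day year, ignoring the $RANDOM modulo bias mentioned above):

```shell
# P(at least one shared birthday among 23 people), exact formula:
awk 'BEGIN { p = 1
             for (i = 0; i < 23; i++) p *= (365 - i) / 365
             printf "%.4f\n", 1 - p }'
# 0.5073
```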
#awk is a perennial. One of the best tools ever for quick processing of large structured data. Courses at the Open Risk Academy introduce awk in the context of US Agency mortgage data analysis. Caveat: you must obtain your own copy of the data #opensource www.openriskacademy.com/course/view....
🌘 Sqawk: a fusion of SQL and Awk, applying SQL to text-based data files
➤ Simplifying text-data processing with SQL
✤ https://github.com/jgarzik/sqawk
Sqawk is a command-line tool that combines the query power of SQL with the text-processing strengths of Awk. It reads delimiter-separated files such as CSV and TSV, loads the data into in-memory tables, runs SQL queries, and writes the results to the console or to a file. Sqawk supports SQL operations including SELECT, INSERT, UPDATE, and DELETE, plus WHERE-clause filtering, sorting, and aggregation; it can also join multiple tables, and offers features like custom delimiters and data-type inference.
+ "This tool is great! I've always wanted to use SQL on CSV files, and now there's finally a tool that makes it possible."
+ "For anyone who needs to quickly analyze large amounts of text data, Sqawk is a very practical tool."
#tools #opensource #SQL #Awk #dataprocessing
remove commas in double quotes #commandline #bash #textprocessing #sed #awk
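One way to do it in awk (a sketch, assuming well-formed quoting with no escaped quotes): split the line on the quote character itself, so the even-numbered chunks are the quoted sections, and strip commas only there:

```shell
# When a line is split on double quotes, fields 2, 4, 6, ... are the
# quoted sections; remove commas only in those.
echo 'a,"b,c",d' | awk 'BEGIN { FS = OFS = "\"" }
                        { for (i = 2; i <= NF; i += 2) gsub(/,/, "", $i) } 1'
# a,"bc",d
```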
Since they are shutting down soon, yesterday evening I downloaded my Pocket data and, because I have weird compulsions, decided to use #awk to parse the very simple CSV file into a very simple Markdown file.
And then, because my previous, equally niche, AWK post had been oddly popular, I wrote about this new script too: https://blog.sgawolf.com/post/2025-05-22-pocket-parser
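Not the script from that post, but the core transform is roughly this shape (assuming the export's title and URL land in the first two comma-separated columns, and that titles contain no commas, which real Pocket exports don't guarantee):

```shell
# Turn a title,url CSV (header on line 1) into a Markdown link list.
printf '%s\n' 'title,url' 'Example,https://example.com' > pocket.csv
awk -F, 'NR > 1 { printf "- [%s](%s)\n", $1, $2 }' pocket.csv
# - [Example](https://example.com)
```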
#awk is still alive: `mix test | awk 'BEGIN { print_next = 0 } { if (print_next) { print; print_next = 0 } else if ($0 ~ /^ *[0-9]+\)/) { print; print_next = 1 } }'`
`mix test` is for running elixir tests. I've got too many failures at the moment and wanted to see all of them, with the file / line number. cc @kevlin
This is not awk I could have easily written by hand.
(but admittedly, for stuff with conditionals etc, there is a lot more #python in the training sets).
Enjoyed @kevlin on 'past and future of programming languages' yesterday. I am an outlier, I programmed in three languages outside the top 20 this week - #elixir, #lisp (emacs) and #awk (it comes bundled with most linux, and I have it on mac as well). Strangely enough, a local #llm makes the latter two more approachable. qwen3:30 is quite good at generating awk, and good enough to get almost-working #emacs lisp.
I wrote a Literate Programming tool in AWK
http://patpatpat.xyz/data/lit/lit.awk
http://patpatpat.xyz/data/lit/example.html
http://patpatpat.xyz/data/lit/example.lit
I dogfooded the script: the page and the script itself are generated from a single literate file. The page hopefully explains my thought process. It's a tiny script, but it feels quite dense!
(Feedback welcome)
I like using AWK, there is something quite charming and fun about programming within the limitations of PATTERN + ACTION
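Not the author's lit.awk, but PATTERN + ACTION fits literate tools naturally: a toy "tangle" step that extracts fenced code from a literate file is just two rules (assuming triple-backtick fences):

```shell
# Toy tangle: print only the lines inside triple-backtick fences.
printf '%s\n' 'prose' '```' 'echo hi' '```' 'more prose' > example.lit
awk '/^```/ { in_code = !in_code; next }
     in_code' example.lit
# echo hi
```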
clear;awk -vL=$(tput lines) -vC=$(tput cols) 'BEGIN{srand();for(;;){s="\033[H";for(i=0;i<L;i++){for(j=0;j<C;j++){t=rand();s=s sprintf("\033[%d;%d;%dm%c",t<.2?1:(t<.4?4:0),30+int(rand()*8),40+int(rand()*8),33+int(rand()*94));}s=s"\n"}printf"%s",s}}'
behold. awk