#CyberRiskManagement

2026-02-27

Third-party breach, 38M impacted, European e-commerce sector.
ManoMano disclosed unauthorized access linked to a subcontracted customer support provider. Exposed data reportedly includes PII and support communications.
Authorities notified: CNIL, ANSSI.
Passwords not reportedly accessed.
Subcontractor access revoked.

Key risk vectors:
– SaaS support platforms
– Vendor access governance
– Over-retention of ticketing data
– Centralized customer communication logs
– Supply chain attack surface expansion

This case reinforces that vendor monitoring must go beyond contractual clauses — continuous assessment, least privilege enforcement, data minimization strategies.

How mature is your third-party risk telemetry?
Engage below.

Source: bleepingcomputer.com/news/secu

Follow @technadu for high-signal infosec reporting.

Repost to amplify awareness across the security community.

#Infosec #ThirdPartyRisk #VendorSecurity #SupplyChainSecurity #DataBreach #GDPRCompliance #EcommerceSecurity #CyberRiskManagement #SecurityOperations #GRC

European DIY chain ManoMano data breach impacts 38 million customers
J. R. DePriest (jrdepriest@infosec.exchange)
2026-02-25

I thought I might post an actual Cyber Security / InfoSec thing for once.

"Visibility without consequences is not governance."

https://www.csoonline.com/article/4136995/boards-dont-need-cyber-metrics-they-need-risk-signals.html

This is a great article.

A large portion of my job is quantifying risk and turning it into numbers to help prioritize vulnerabilities, pen test findings, CNAPP reports, compliance failures, and misconfigurations. I use all kinds of values to calculate "a number" for each finding. I'll probably throw my methodology up on a gist soon because I'd like feedback and ideas for how to make it better. Incidentally, is there a gist equivalent on Codeberg?
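To make the idea concrete, here is a minimal sketch of what that kind of scoring formula might look like. The weights, field names, and multipliers below are illustrative assumptions of mine, not the author's actual methodology.

```python
# Hypothetical risk-scoring sketch: combine severity with context
# multipliers to get "a number" per finding. All weights are made up
# for illustration.

SEVERITY_WEIGHT = {"critical": 10.0, "high": 7.0, "medium": 4.0, "low": 1.0}

def risk_score(finding: dict) -> float:
    """Blend severity, exposure, asset criticality, and exploitation."""
    base = SEVERITY_WEIGHT[finding["severity"]]
    exposure = 1.5 if finding.get("internet_facing") else 1.0
    criticality = finding.get("asset_criticality", 1.0)  # e.g. 0.5-2.0
    exploited = 2.0 if finding.get("known_exploited") else 1.0
    return base * exposure * criticality * exploited

findings = [
    {"severity": "high", "internet_facing": True, "known_exploited": True},
    {"severity": "critical", "asset_criticality": 0.5},
]
# Note how context can rank a "high" above a "critical":
for f in sorted(findings, key=risk_score, reverse=True):
    print(risk_score(f), f["severity"])
```

The interesting property is exactly the one the article warns about: once context multipliers exist, an "exception" that zeroes a multiplier makes the number look better without changing the underlying risk.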

With that said, this article talks about all the things that "a number" cannot do and all the other important things the board and other stakeholders and decision makers at that level should know.

There are lots of quotable lines, but my favorite, the one I'd like on a T-shirt or hanging on posters in every break room is: "Visibility without consequences is not governance."

It's important because we run up against it time and time again. A business line says WONTFIX, so they get an exception for X months (or years). That number no longer counts against them. As my boss likes to joke, "we'll just tell the malicious actors we have an exception and ask them not to exploit it." That doesn't work. It hides risk. But when all you care about is "a number," then fixing that number becomes the goal, not fixing the underlying risk.

Again, this is a good article. Read it. Agree with it. Gnash your teeth that you can't do the things it suggests and that your board would never go for it. Or, more likely, your board will never know this is an option because the C-level execs are too terrified of rocking the boat.

#InfoSec #Metrics #GRC #CyberSecurity #VulnerabilityMetrics #ITRisk #ITRiskManagement #ITSecurity #CyberRisk #CyberRiskManagement

The Brutal Truth About “Trusted” Phishing: Why Even Apple Emails Are Burning Your SOC

1,158 words, 6 minutes read time.

I’ve been in this field long enough to recognize a pattern that keeps repeating, no matter how much tooling we buy or how many frameworks we cite. Every major incident, every ugly postmortem, every late-night bridge call starts the same way: someone trusted something they were conditioned to trust. Not a zero-day, not a nation-state exploit chain, not some mythical hacker genius—just a moment where a human followed a path that looked legitimate because the system trained them to do exactly that. We like to frame cybersecurity as a technical discipline because that makes it feel controllable, but the truth is that most real-world compromises are social engineering campaigns wearing technical clothing. The Apple phishing scam circulating right now is a perfect example, and if you dismiss it as “just another phishing email,” you’re missing the point entirely.

Here’s what makes this particular scam dangerous, and frankly impressive from an adversarial perspective. The victim receives a text message warning that someone is trying to access their Apple account. Immediately, the attacker injects urgency, because urgency shuts down analysis faster than any exploit ever could. Then comes a phone call from someone claiming to be Apple Support, speaking confidently, calmly, and procedurally. They explain that a support ticket has been opened to protect the account, and shortly afterward, the victim receives a real, legitimate email from Apple with an actual case number. No spoofed domain, no broken English, no obvious red flags. At that moment, every instinct we’ve trained users to rely on fires in the wrong direction. The email is real. The ticket is real. The process is real. The only thing that isn’t real is the person on the other end of the line. When the attacker asks for a one-time security code to “close the ticket,” the victim believes they’re completing a security process, not destroying it. That single moment hands the attacker the keys to the account, cleanly and quietly, with no malware and almost no telemetry.

What makes this work so consistently is that attackers have finally accepted what many defenders still resist admitting: humans are the primary attack surface, and trust is the most valuable credential in the environment. This isn’t phishing in the classic sense of fake emails and bad links. This is confidence exploitation, the same psychological technique that underpins MFA fatigue attacks, helpdesk impersonation, OAuth consent abuse, and supply-chain compromise. The attacker doesn’t need to bypass controls when they can persuade the user to carry them around those controls and hold the door open. In that sense, this scam isn’t new at all. It’s the same strategy that enabled SolarWinds to unfold quietly over months, the same abuse of implicit trust that allowed NotPetya to detonate across global networks, and the same manipulation of expected behavior that made Stuxnet possible. Different scale, different impact, same foundational weakness.

From a framework perspective, this attack maps cleanly to MITRE ATT&CK, and that matters because frameworks are how we translate gut instinct into organizational understanding. Initial access occurs through phishing, but the real win for the attacker comes from harvesting authentication material and abusing valid accounts. Once they’re in, everything they do looks legitimate because it is legitimate. Logs show successful authentication, not intrusion. Alerts don’t fire because controls are doing exactly what they were designed to do. This is where Defense in Depth quietly collapses, not because the layers are weak, but because they are aligned around assumptions that no longer hold. We assume that legitimate communications can be trusted, that MFA equals security, that awareness training creates resilience. In reality, these assumptions create predictable paths that adversaries now exploit deliberately.

If you’ve ever worked in a SOC, you already know why this type of attack gets missed. Analysts are buried in alerts, understaffed, and measured on response time rather than depth of understanding. A real Apple email doesn’t trip a phishing filter. A user handing over a code doesn’t generate an endpoint alert. There’s no malicious attachment, no beaconing traffic, no exploit chain to reconstruct. By the time anything unusual appears in the logs, the attacker is already authenticated and blending into normal activity. At that point, the investigation starts from a place of disadvantage, because you’re hunting something that looks like business as usual. This is how attackers win without ever making noise.

The uncomfortable truth is that most organizations are still defending against yesterday’s threats with yesterday’s mental models. We talk about Zero Trust, but we still trust brands, processes, and authority figures implicitly. We talk about resilience, but we train users to comply rather than to challenge. We talk about human risk, but we treat training as a checkbox instead of a behavioral discipline. If you’re a practitioner, the takeaway here isn’t to panic or to blame users. It’s to recognize that trust itself must be treated as a controlled resource. Verification cannot stop at the domain name or the sender address. Processes that allow external actors to initiate internal trust workflows must be scrutinized just as aggressively as exposed services. And security teams need to start modeling social engineering as an adversarial tradecraft, not an awareness problem.

For SOC analysts, that means learning to question “legitimate” activity when context doesn’t line up, even if the artifacts themselves are clean. For incident responders, it means expanding investigations beyond malware and into identity, access patterns, and user interaction timelines. For architects, it means designing systems that minimize the blast radius of human error rather than assuming it won’t happen. And for CISOs, it means being honest with boards about where real risk lives, even when that conversation is uncomfortable. The enemy is no longer just outside the walls. Sometimes, the gate opens because we taught it how.

I’ve said this before, and I’ll keep saying it until it sinks in: trust is not a security control. It’s a vulnerability that must be managed deliberately. Attackers understand this now better than we do, and until we catch up, they’ll keep walking through doors we swear are locked.

Call to Action

If this breakdown helped you think a little clearer about the threats out there, don’t just click away. Subscribe for more no-nonsense security insights, drop a comment with your thoughts or questions, or reach out if there’s a topic you want me to tackle next. Stay sharp out there.

D. Bryan King

Sources

MITRE ATT&CK Framework
NIST Cybersecurity Framework
CISA – Avoiding Social Engineering and Phishing Attacks
Verizon Data Breach Investigations Report
Mandiant Threat Intelligence Reports
CrowdStrike Global Threat Report
Krebs on Security
Schneier on Security
Black Hat Conference Whitepapers
DEF CON Conference Archives
Microsoft Security Blog
Apple Platform Security

Disclaimer:

The views and opinions expressed in this post are solely those of the author. The information provided is based on personal research, experience, and understanding of the subject matter at the time of writing. Readers should consult relevant experts or authorities for specific guidance related to their unique situations.

#accountTakeover #adversaryTradecraft #ApplePhishingScam #attackSurfaceManagement #authenticationSecurity #breachAnalysis #breachPrevention #businessEmailCompromise #CISOStrategy #cloudSecurityRisks #credentialHarvesting #cyberDefenseStrategy #cyberIncidentAnalysis #cyberResilience #cyberRiskManagement #cybercrimeTactics #cybersecurityAwareness #defenseInDepth #digitalIdentityRisk #digitalTrustExploitation #enterpriseRisk #enterpriseSecurity #humanAttackSurface #identityAndAccessManagement #identitySecurity #incidentResponse #informationSecurity #MFAFatigue #MITREATTCK #modernPhishing #NISTFramework #phishingAttacks #phishingPrevention #securityArchitecture #SecurityAwarenessTraining #securityCulture #securityLeadership #securityOperationsCenter #securityTrainingFailures #SOCAnalyst #socialEngineering #threatActorPsychology #threatHunting #trustedBrandAbuse #trustedPhishing #userBehaviorRisk #zeroTrustSecurity

A cybersecurity analyst in a dark command center analyzing deceptive trusted phishing attacks symbolized by a chessboard and security dashboards.
hackmac
2025-07-29

IT Security & Cyber Insurance: A Must, Not a Nice-to-Have!
In a joint interview with Robert Brockbals, managing director of the SIEVERS-GROUP, we discuss one of the most pressing questions of our time: why cyber resilience is not a luxury, but essential for survival. Read the interview: 🔗 sievers-group.com/blog/warum-i

Steganography: The Art of Hiding Malware Right Under Your Nose

1,732 words, 9 minutes read time.


About six years ago — back before COVID turned everything upside down — I was deep-diving into Microsoft’s Power Platform, that sprawling suite of tools designed to help businesses build apps and automate workflows with ease. During that exploration, I uncovered a pretty fascinating vulnerability. It wasn’t a simple “click and exploit” kind of hole, but with the right conditions and a bit of clever maneuvering, I found a way to modify and execute code on SharePoint as another user entirely.

What made that experience so gripping wasn’t just the technical challenge. It was the realization that sometimes, it’s not the loud, flashy malware that gets you. It’s the subtle, elegant gaps in logic — the quiet backdoors that let attackers slip in unnoticed.

That’s exactly why exploits like steganography catch my attention. This ancient art of hiding secret messages in plain sight has evolved for the digital age. Instead of ink and paper, attackers now tuck malicious code inside everyday files — images, wallpapers, documents — right under your nose. No alarms, no obvious signs, just malware chilling quietly where you’d least expect it.

So today, let’s dive into how hackers pull off these sneaky attacks, why they’re so hard to spot, and most importantly, how you can keep your systems safe without losing your mind. Because in cybersecurity, staying curious and prepared is the best defense — and sometimes the coolest part of the job.

So, what the heck is steganography anyway?

Let’s get nerdy for a sec. Steganography is basically the art of sneaking secret data inside something that looks normal. The word comes from Greek roots meaning “covered writing.” Long before computers, people were hiding tiny messages in wax tablets, tattooing them on slaves’ scalps (gross but effective), or writing invisible ink love letters that only appeared under heat.

Fast forward to the digital era. Today, steganography usually means tucking malicious code inside innocent-looking files—like JPEGs, PNGs, MP3s, or even PDFs.

Unlike encryption, which screams, “Hey, I’m hiding something!” (even if the contents are scrambled), steganography tries to avoid suspicion altogether. It’s more like slipping a fake grocery list to your buddy that actually details your plan to raid the cookie jar after midnight. To everyone else? Just another boring shopping note.

How do hackers pull off this cyber-magic?

Now, let’s break down the trick that’s got the hacking world buzzing. Cybercriminals often use something called LSB (Least Significant Bit) steganography. In layman’s terms, they tweak the smallest bits of image data that our eyes can’t perceive.

Think of an image as a giant spreadsheet of pixel colors—millions of tiny red, green, and blue (RGB) values. Adjust the last bit of that RGB data from a 1 to a 0? The human eye won’t notice. But a decoding script sure will.
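A minimal sketch of that LSB trick, operating on a bare bytearray rather than a real image file (real attacks do the same thing to the RGB channels of a PNG or BMP; the helper names here are my own):

```python
# Toy LSB steganography: hide a message in the least significant bit
# of each "pixel" byte. No image library needed for the demonstration.

def embed(pixels: bytearray, message: bytes) -> bytearray:
    """Overwrite the LSB of successive bytes with the message bits."""
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    assert len(bits) <= len(pixels), "cover too small for message"
    out = bytearray(pixels)
    for idx, bit in enumerate(bits):
        out[idx] = (out[idx] & 0xFE) | bit  # clear LSB, set message bit
    return out

def extract(pixels: bytearray, length: int) -> bytes:
    """Read back `length` bytes from the LSBs, MSB-first per byte."""
    out = bytearray()
    for i in range(length):
        byte = 0
        for bit in pixels[i * 8 : i * 8 + 8]:
            byte = (byte << 1) | (bit & 1)
        out.append(byte)
    return bytes(out)

cover = bytearray(range(256)) * 2          # stand-in for pixel data
stego = embed(cover, b"hi")
print(extract(stego, 2))                   # b'hi'
```

Each cover byte changes by at most 1, which is exactly why the human eye (and a signature-based scanner) sees nothing.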

John Hammond, an absolute wizard in the cybersecurity content space (and whose awesome YouTube video inspired this whole breakdown—watch it here), recently showed how malware could be buried inside a normal desktop wallpaper. His demo: a slick “innocent” image hides encrypted shellcode. When decoded and executed, it pops open a malicious process. Pretty elegant—and terrifying.

According to Kaspersky, hackers love this because it lets them “pass malicious content off as harmless data, thus bypassing traditional detection systems.” Imagine your favorite wrench suddenly refusing to fit a bolt—not because the bolt changed, but because it was secretly swapped for a malicious clone with the same measurements. That’s the cybersecurity equivalent here.

Why do cyber crooks even bother with this?

Simple. Traditional antivirus programs look for suspicious behaviors or known malware signatures. They don’t always scrutinize the actual pixel guts of an image file. So by hiding malware in a .png or .bmp, attackers can slip right past gatekeepers.

CSO Online points out that steganography has surged because it avoids raising alarms. It’s “like smuggling something through customs in your shoe—if the scanner’s not tuned to look inside footwear, you’re golden.”

This technique is also devilishly flexible. It works over social media, email attachments, file shares, cloud drives. Basically anywhere you can upload and download pictures, the door is open. In one nasty example, the XWorm remote access Trojan stashed its payload inside images to sneak past email defenses—The Hacker News did a great write-up on it.

How can you protect yourself (without swearing off wallpapers forever)?

Alright, here’s where we get practical. First, don’t panic. I still use cool wallpapers every day. But I also keep my wits about me.

For most casual users, the biggest risks come from downloading images off sketchy sites, pirated software bundles, shady Discord servers, or random email attachments. If it looks too good to be true—like “Free RTX 4090 Wallpapers EXCLUSIVE!!” hosted on some rando .ru domain—it probably is.

Basic cyber hygiene is your first line of defense. Keep your OS and all software up to date so known vulnerabilities get patched. Use a reputable antivirus or endpoint security suite. Many modern tools do more than scan executables—they watch for suspicious memory activity, rogue scripts, or weird outbound connections. That helps catch malware even if it tries to wriggle out of a hidden image and run.

Want to level up? If you’re more of a power user, consider using image sanitization tools. These can strip out metadata, convert images into formats that don’t retain hidden stego data, or even rebuild the file entirely. Think of it as pressure-washing your wallpaper before hanging it on your wall.
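One crude way to see why re-encoding or sanitizing works: if you randomize every pixel's least significant bit, any LSB-embedded payload is wrecked while the image is visually unchanged. This sketch is an illustration of the principle, not a substitute for a real sanitization tool:

```python
# LSB scrubbing sketch: randomize each byte's least significant bit so
# any hidden LSB bitstream is destroyed. The visual cost is at most a
# +/-1 change per channel value, which is imperceptible.

import random

def scrub_lsb(pixels: bytearray) -> bytearray:
    """Return a copy with every LSB replaced by a random bit."""
    out = bytearray(pixels)
    for i in range(len(out)):
        out[i] = (out[i] & 0xFE) | random.getrandbits(1)
    return out

suspicious = bytearray(b"\x10\x11\x12\x13" * 8)
clean = scrub_lsb(suspicious)
# Every byte differs from the original by at most 1:
assert all(abs(a - b) <= 1 for a, b in zip(suspicious, clean))
```

Lossy re-encoding (e.g. saving as JPEG) achieves a similar effect as a side effect of compression, which is why stego payloads usually travel in lossless formats.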

You could also isolate downloads in a sandbox or virtual machine first. That way, if something does try to execute, it’s trapped in a safe bubble—like a zoo enclosure for digital tigers.

What about the hardcore detection stuff?

If you’re deep into cybersecurity—maybe running your own labs or defending an organization—then tools like Content Disarm and Reconstruction (CDR) come in handy. These essentially break down and rebuild incoming files to strip any hidden nasties, while still delivering a usable document or image.

Network monitoring is also key. Tools that inspect data flows (IDS/IPS) might pick up weird encrypted blobs inside image files being exfiltrated from your network—like catching a burglar not because they broke the window, but because they’re awkwardly tiptoeing through your backyard with your TV under their arm.

There are also steganalysis tools that look for statistical anomalies in images—basically forensic microscopes that can spot tiny pixel irregularities. Not foolproof, but every extra layer helps.
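As a toy example of the statistical idea: LSB embedding of encrypted or compressed data pushes the ratio of 1-bits in the LSB plane toward exactly 0.5, while natural image data is often more biased. This heuristic is a hint for further inspection, not proof of stego:

```python
# Toy steganalysis heuristic: measure the fraction of pixel bytes
# whose least significant bit is 1. Near-perfect 0.5 on a large sample
# can indicate embedded random-looking (encrypted) data.

def lsb_ones_ratio(pixels: bytes) -> float:
    """Fraction of bytes with LSB set."""
    return sum(b & 1 for b in pixels) / len(pixels)

natural = bytes([10, 10, 12, 14, 14, 14, 16, 16] * 64)   # biased LSBs
suspect = bytes((b & 0xFE) | (i % 2) for i, b in enumerate(natural))
print(round(lsb_ones_ratio(natural), 3))  # 0.0 -- far from 0.5
print(round(lsb_ones_ratio(suspect), 3))  # 0.5 -- suspiciously uniform
```

Real steganalysis tools use much stronger tests (chi-square, RS analysis, sample-pair analysis), but they build on the same idea of detecting statistical fingerprints the embedding leaves behind.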

That wallpaper exploit demo: what John Hammond uncovered in the wild

Circling back to John Hammond’s excellent video — this wasn’t just a fun lab experiment or hypothetical scenario. John was actually analyzing a real-world malware sample found in the wild, where attackers had hidden malicious data inside an innocent-looking wallpaper image.

His breakdown showed how threat actors stuffed encoded configuration data into the pixels of the image. Later, the malware retrieved that image, parsed it, and used the extracted data to help build out its next-stage payload. It’s a smart way to stay under the radar: most antivirus tools don’t scan the pixel data of a wallpaper for hidden instructions meant to control malware.

Watching John reverse-engineer this is equal parts fascinating and alarming. It’s like seeing a locksmith show you exactly how burglars might pick the lock on your front door — suddenly, that “harmless” image file looks a whole lot more suspicious.

If you want to see the full demo (and trust me, it’s worth it), check out John Hammond’s YouTube video here. It’s a top-notch real-world example of why cybersecurity folks always say: trust, but verify — even when it comes to pretty wallpapers.

The big takeaway: Don’t be the low-hanging fruit

Hackers are opportunists. Sure, there are advanced state-level APTs who might specifically target you, but most crooks are after easy marks. Keep your systems patched, be suspicious of unexpected downloads, and monitor your network for weird behavior.

Also, if you’re running a business, invest in employee training. Phishing is still the #1 way malware gets through—someone on the sales team double-clicks “Invoice_OMG.png” from an unknown sender, and boom, you’re on the nightly news. Not a great look.

Want to geek out more?

If you’re hungry for the gritty technicals, you can explore guides on how steganography works, plus defenses and detection, from sites like Imperva, Fortra, and SentinelOne. There’s no shortage of reading, and trust me, it’s a rabbit hole worth diving into.

Also, huge hat tip again to John Hammond. Check out his full video breakdown here on YouTube. It’s like a magician revealing exactly how the trick works—super insightful and definitely worth the watch.

Wrap-up: Stay sharp, stay curious

So that’s the skinny on steganography, the sneaky malware tactic hiding right under your nose—literally on your desktop background. The next time you download a killer wallpaper or any random file, pause for a heartbeat and think, “Could this be more than it seems?”

Want more juicy cybersecurity deep dives, fresh threat breakdowns, and the occasional bad hacker joke? Subscribe to our newsletter below. Or drop a comment and tell me your wildest malware encounter—I’d love to hear your story. If you’re wrestling with a weird security problem, feel free to reach out directly. Always happy to talk shop.

Stay safe out there—and hey, keep your wallpapers awesome (just maybe run ‘em through a sanity check first).

D. Bryan King

Disclaimer:

The views and opinions expressed in this post are solely those of the author. The information provided is based on personal research, experience, and understanding of the subject matter at the time of writing. Readers should consult relevant experts or authorities for specific guidance related to their unique situations.

#advancedPersistentThreats #codeExecutionExploit #cyberAttackMitigation #cyberAttackTechniques #cyberDefenseStrategies #cyberIntrusionMethods #cyberRiskManagement #cyberThreatIntelligence #cyberThreatPrevention #cyberattackAwareness #cyberattackExamples #cyberattackPrevention #cybercrimeDefense #cybersecurityAwareness #cybersecurityBestPractices #cybersecurityEducation #CybersecurityTips #digitalForensics #digitalSteganography #EndpointSecurity #exploitDetection #hackerTactics #hackerTricks #hiddenMalware #hidingMalwareInImages #imageSteganography #informationSecurity #maliciousPayloadHiding #malwareAnalysis #malwareCommunicationHiding #malwareDeliveryMethods #malwareDetection #malwareEvasion #malwareHidingMethods #malwareHidingTechniques #malwareInWallpapers #malwareObfuscation #malwarePayloadEmbedding #malwarePayloadExtraction #malwarePayloadLoading #malwarePayloads #malwarePreventionStrategies #malwareStealthTechniques #networkSecurity #PowerPlatformVulnerability #realWorldExploits #SharePointExploit #stealthMalware #steganographicMalware #steganographyMalware #threatActorTechniques #threatHunting #wallpaperMalware

Boston Managed IT (bmit)
2025-01-09

Springfield insurance company faces a lawsuit over a data breach. Learn how to manage threat exposure in your business.

zurl.co/Wj338

Essential Aspects in Ethical Leadership Approaches in the Cybersecurity Niche
Ethical leadership has become a cornerstone in safeguarding digital assets and ensuring the integrity of sensitive information.
thecybersecurityleaders.com/es

Anonymous 🐈️🐾☕🍵🏴🇵🇸 (youranonriots@kolektiva.social)
2024-07-01

🚨 HACKMANAC GLOBAL CYBER ATTACKS REPORT 2024 🚨

📊 Our Global Cyber Attacks Report 2024 is now available!

📈 In this 40+ page study we analyze more than 7,000 known attacks that occurred globally in 2023 and compare them to those of the previous five years (2018-2022), highlighting key trends such as:

🔍 Evolution of cyber threats
🔍 Most targeted industries
🔍 Most used attack techniques
🔍 Types of attackers
🔍 Most dangerous cybercrime gangs
🔍 Attack impact by industry, attacker, etc.

📥 Visit our download area for a free copy of the report ⤵
hackmanac.com/hackmanac-global

#Hackmanac #CyberSecurityReport #CyberThreatMonitoring #CyberAttacks2024 #CyberThreats #CyberRiskManagement

2023-07-20

Tomorrow (Thurs, July 20) I'm hosting a webinar to share key findings from several years' worth of published research on vulnerability remediation. We have 8 data-packed reports to cover in ~30 minutes. To accomplish that, I've chosen two representative charts from each report - which was TOUGH!

Register here and let me know how you think I did: us02web.zoom.us/webinar/regist

#vulnerability #vulnerabilities #devops #devsecops #vulnerabilitymanagement #vulnerability #vulnerabilityassessment #vulnerabilityscanning #exposuremanagement #remediation #cyberriskmanagement #informationsecurity #infosec #appsec #applicationsecurity #appsecurity

2023-07-06

Excerpt from my latest Cyentia Institute blog post, “Patching, Fast and Slow”:

There are many ways one could measure how quickly vulnerabilities are patched. Most go with a simple average, but such point statistics are a poor representation of what’s really happening with remediation timeframes. Our favored method for this is survival analysis. I won’t get into the methodology here other than to say it tracks the “death” (remediation) of vulnerabilities over time to produce a curve that looks like the ones below comparing remediation speed among sectors.
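A bare-bones sketch of how such a curve is computed, using a simple Kaplan-Meier estimator. The data is invented for illustration; entries with `fixed=False` are still-open vulnerabilities (right-censored observations), and this is my own simplification, not Cyentia's actual analysis code:

```python
# Minimal Kaplan-Meier survival curve for remediation times.
# observations: (days_observed, fixed) -- fixed=False means the vuln
# was still open when observation ended (censored).

def survival_curve(observations):
    """Return [(day, fraction_still_open)] at each remediation event."""
    obs = sorted(observations)
    at_risk, surv, curve = len(obs), 1.0, []
    for day, fixed in obs:
        if fixed:
            surv *= (at_risk - 1) / at_risk  # a "death" (remediation)
            curve.append((day, surv))
        at_risk -= 1                         # censored obs just drop out
    return curve

data = [(7, True), (14, True), (30, False), (45, True), (90, False)]
for day, pct_open in survival_curve(data):
    print(f"day {day:>3}: {pct_open:.0%} still unremediated")
```

The payoff over a simple average is visible in the censoring: the two still-open vulns pull the curve's tail up instead of being silently dropped, which is exactly the distortion point statistics hide.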

The lesson? Get remediation strategy advice from your investment firm rather than your insurer, perhaps? We could ask a bunch of other questions about why certain organizations or industries struggle more than others to address vulnerabilities…but this isn’t that post. But I do suspect the “system” guiding the patching strategies of these organizations makes a big difference in the shape of their remediation curves.

You may have caught the title of this post being a reference to Daniel Kahneman’s book “Thinking, Fast and Slow.” That was partly because it’s catchy and fits the topic. But I also think there’s a parallel to be drawn from one of the main points of that book. Kahneman describes two basic types of thinking that drive human decision-making:

System 1: Fast, automatic, frequent, emotional, stereotypic, unconscious

System 2: Slow, effortful, infrequent, logical, calculating, conscious

Maybe you see where I’m headed here. I’m not saying we can boil all patching down to just two different approaches. But my experience and research support the notion that there are two broad systems at play. Many assets lend themselves to automated, fast deployment of patches without much additional preparation or evaluation (e.g., newer versions of Windows and OSX). Those fall under System 1 patching.

Other assets require manual intervention, testing, risk evaluation, or additional effort to deploy. That fits the System 2 definition well. The more your organization has to engage in System 2 rather than System 1 patching, the slower and shallower those remediation timelines will appear. Like normal decisions, we can’t do everything via System 1…some assets need that extra System 2 treatment. But problems (and/or delays) arise when there’s a mismatch between the system used and the decision (remediation) scenario.

My takeaway for vulnerability management programs? Use System 1 patching as much as possible and System 2 patching only where necessary.

See all the analysis leading up to this conclusion in the full post: cyentia.com/patching-fast-and-

#patchmanagement #vulnerabilitymanagement #vulnerabilityassessment #vulnerabilities #exposuremanagement #riskmanagement #cyberriskmanagement #remediation #cve #appsec #appsecurity #secops #securityoperations #cybersecurity #infosec #infosecurity
