mark carter

#engineering #infosec #opensource #investor, board member, and past #startup #founder. Worked with great people at #Vimeo; previously #salesforce EVP, #aws GM, #tesla CISO, #google, #microsoft. 🤔 Been around too long but still having fun ☺️ Twitter handle: @markcartertm

2025-05-15

Great opportunity 🧙 Seeking a Staff Security Software Engineer in #NYC #California #Texas to help protect the world's leading travel and expense AI platform 🛫
Reporting to the amazing Teja Myneedu, Sr. Director of Security and Trust, you will contribute significantly to building and scaling the security of Navan products. This position requires advanced technical skills, strong communication skills, and the ability to influence people. You will be responsible for ensuring the continuous security of Navan's customer-facing products and internal tools. You will focus on driving and advising risk remediation based on research, and on developing strong partnerships with engineering and product teams to accelerate software releases with security by design.
lnkd.in/gPFe7ze3
lnkd.in/gNGqaQaJ
lnkd.in/gccqxuce #Hiring #Engineering #Infosec

mark carter boosted:
dansup
2025-04-25

Check this out ✨

The new FediDB has been redesigned and refactored to leverage the 394 million data points we've collected and aggregated.

Imagine being able to easily find a server with open registration that is mature (3+ years old) and located in your region, with just a few clicks!

Looking forward to working with fedi devs to further improve this, and to finish open sourcing the entire platform 🚀

github.com/fedidb
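
A hypothetical sketch (Python) of the kind of query this enables: list servers with open registration that are at least three years old. The endpoint and field names here are assumptions for illustration, not the documented FediDB API:

```
import json
import urllib.request
from datetime import datetime, timedelta, timezone

CUTOFF = datetime.now(timezone.utc) - timedelta(days=3 * 365)

# Assumed endpoint and response shape; check the real FediDB API docs.
with urllib.request.urlopen("https://api.fedidb.org/v1/servers", timeout=10) as resp:
    servers = json.load(resp)["data"]

for server in servers:
    # Assumed field names: first_seen_at, open_registration, domain.
    first_seen = datetime.fromisoformat(server["first_seen_at"].replace("Z", "+00:00"))
    if server.get("open_registration") and first_seen < CUTOFF:
        print(server["domain"])
```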

2025-04-25

🛡️ Yale New Haven Health System (YNHHS) disclosed a data breach affecting the personal information of over 5.5 million patients. Compromised information includes names, dates of birth, addresses, phone numbers, emails, race/ethnicity, SSNs and medical record numbers.

securityweek.com/5-5-million-p #Infosec

2025-04-13

🛡️ Bank of America Discloses Data Breach After Customers’ Documents Disappear, Says Names, Addresses, Account Information and Social Security Numbers Affected dailyhodl.com/2025/04/12/bank- #Infosec

2025-04-10

Great to see authentication and authorization finally integrated into agentic AI 🛡️ Very excited about the Agent2Agent Protocol (A2A) 👍 well-written technical documentation. Recommended read: google.github.io/A2A/#/documen #MachineLearning #Infosec
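
A minimal sketch (Python) of the discovery step the docs describe: fetch a remote agent's card from its well-known path and inspect the advertised authentication schemes. The host below is a placeholder, and the field names follow published agent card examples, so treat them as assumptions:

```
import json
import urllib.request

# Placeholder host; an A2A agent serves its card at this well-known path.
AGENT_HOST = "https://agent.example.com"

with urllib.request.urlopen(f"{AGENT_HOST}/.well-known/agent.json", timeout=10) as resp:
    card = json.load(resp)

print("Agent:", card.get("name"))
print("Skills:", [skill.get("name") for skill in card.get("skills", [])])
# The card advertises supported auth schemes (e.g. OAuth 2.0, API keys),
# letting clients authenticate before delegating tasks.
print("Auth schemes:", card.get("authentication", {}).get("schemes"))
```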

2025-03-27

🤔 NIST Trustworthy and Responsible AI
NIST AI 100-2e2025 - Adversarial Machine Learning
A Taxonomy and Terminology of Attacks and Mitigations nvlpubs.nist.gov/nistpubs/ai/N #Infosec

2025-03-25

Interesting 🤔 Large language model-powered AI systems achieve self-replication with no human intervention. Self-replication with no human intervention is broadly recognized as one of the principal red lines associated with frontier AI systems. While leading corporations such as OpenAI and Google DeepMind have assessed o3-mini and Gemini on replication-related tasks and concluded that these systems pose minimal self-replication risk, our research presents novel findings.

Following the same evaluation protocol, we demonstrate that 11 out of 32 existing AI systems under evaluation already possess the capability of self-replication. In hundreds of experimental trials, we observe a non-trivial number of successful self-replication trials across mainstream model families worldwide, even including models with as few as 14 billion parameters, which can run on personal computers. Furthermore, we note that self-replication capability increases as models become more intelligent in general. By analyzing the behavioral traces of diverse AI systems, we observe that existing AI systems already exhibit sufficient planning, problem-solving, and creative capabilities to accomplish complex agentic tasks, including self-replication.

More alarmingly, we observe successful cases where an AI system performs self-exfiltration without explicit instructions, adapts to harsher computational environments without sufficient software or hardware support, and plots effective strategies to survive shutdown commands from humans. These novel findings offer a crucial time buffer for the international community to collaborate on establishing effective governance over the self-replication capabilities and behaviors of frontier AI systems, which could otherwise pose existential risks to human society if not well controlled. arxiv.org/abs/2503.17378 #MachineLearning

2025-03-18

🛡️ 'Dead simple' hijacking hole in Apache Tomcat 'now actively exploited in the wild'. Authentication is not required to pull off an attack, and the end result is that miscreants can run arbitrary code on the targeted Tomcat server, allowing them to access data, among other nefarious things. "We've already seen this in operation by Chinese operators, and CISA [the US government's Cybersecurity and Infrastructure Security Agency] got in touch tonight and are going to add the exploit to its warning list," Ivan Novikov, Wallarm's CEO, told The Register.

theregister.com/2025/03/18/apa #Infosec

2025-03-15

😜 An AI Coding Assistant Refused to Write Code—and Suggested the User Learn to Do It Himself arstechnica.com/ai/2025/03/ai- #AI

2025-03-15

The tj-actions/changed-files #GitHub Action, which is currently used in over 23,000 repositories, has been compromised. In this attack, the attackers modified the action’s code and retroactively updated multiple version tags to reference the malicious commit. The compromised Action prints CI/CD secrets in GitHub Actions build logs. If the workflow logs are publicly accessible (such as in public repositories), anyone could potentially read these logs and obtain exposed secrets. There is no evidence that the leaked secrets were exfiltrated to any remote network destination.

stepsecurity.io/blog/harden-ru #Infosec
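
A minimal sketch (Python) of the standard countermeasure: scan workflows for actions referenced by a mutable tag or branch rather than a full commit SHA, since SHA-pinning defeats retroactively moved tags like the ones used in this attack. The paths and regex are illustrative assumptions:

```
import re
from pathlib import Path

# Matches `uses: owner/repo@ref`, capturing the reference after '@'.
USES_RE = re.compile(r"uses:\s*([\w.-]+/[\w./-]+)@([\w./-]+)")
FULL_SHA = re.compile(r"^[0-9a-f]{40}$")

for workflow in Path(".github/workflows").glob("*.y*ml"):
    for lineno, line in enumerate(workflow.read_text().splitlines(), start=1):
        m = USES_RE.search(line)
        if m and not FULL_SHA.match(m.group(2)):
            # Tags and branches can be moved after the fact; full SHAs cannot.
            print(f"{workflow}:{lineno}: {m.group(1)}@{m.group(2)} is not pinned to a commit SHA")
```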

2025-02-28

Wayback Copilot: Using Microsoft's Copilot to Expose Thousands of Private GitHub Repositories 🛡️ Any information that was ever public, even for a short period, could remain accessible and distributed by Microsoft Copilot.
Research findings:
20,580 GitHub repositories were extracted during the research using Bing's caching mechanism
16,290 organizations were affected by Wayback Copilot, including Microsoft itself, Google, Intel, Huawei, PayPal, IBM, Tencent, and more
100+ internal Python and Node.js packages that could be vulnerable to dependency confusion were discovered (see the sketch below)
300+ private tokens, keys, and secrets for GitHub, Hugging Face, GCP, OpenAI, etc. were exposed

lasso.security/blog/lasso-majo #Infosec
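
A minimal sketch (Python) of a dependency confusion check along these lines: ask the public PyPI index whether an internal package name is already claimed, so unclaimed names can be defensively registered and unexpectedly claimed ones investigated. The package names are placeholders, not from the research; the PyPI JSON endpoint itself is real:

```
import urllib.error
import urllib.request

# Placeholder internal package names, for illustration only.
INTERNAL_PACKAGES = ["acme-internal-auth", "acme-billing-client"]

def is_claimed_on_pypi(name: str) -> bool:
    """Return True if a package with this name exists on the public index."""
    try:
        with urllib.request.urlopen(f"https://pypi.org/pypi/{name}/json", timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as e:
        if e.code == 404:  # name is unclaimed on public PyPI
            return False
        raise

for pkg in INTERNAL_PACKAGES:
    if is_claimed_on_pypi(pkg):
        print(f"{pkg}: claimed on public PyPI - investigate who published it")
    else:
        print(f"{pkg}: unclaimed - consider registering it defensively")
```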

2025-02-26

👍 Protect your personal information and easily take action on outdated content in Search results. Seeing that others have published your personal information online can be stressful. Google's newly redesigned Results about you tool protects your privacy by scanning for results containing information like your phone number or address and helping you quickly remove them. Our new hub makes signing up easier than ever, and with proactive monitoring we'll do the hard work for you, alerting you if new results are found.
blog.google/feed/results-about #Infosec #Privacy

2025-02-26

57% of enterprise employees admit to entering high-risk information into publicly available generative AI assistants, exposing critical security gaps in enterprise AI usage 🛡️ Nearly seven out of 10 (68%) enterprise employees who use generative AI (GenAI) at work say they access publicly available GenAI assistants such as ChatGPT, Microsoft Copilot or Google Gemini through personal accounts, and more than half (57%) admit to entering sensitive information into them.
Surveyed employees admitted to entering the following types of information into publicly available GenAI assistants:

Personal data, such as names, addresses, emails and phone numbers (31%).
Product or project details, including unreleased details and prototypes (29%).
Customer information, including names, contact details, order history, chat logs, emails, or recorded calls (21%).
Confidential company financial information, such as revenue, profit margins, budgets, or forecasts (11%).
This happens despite nearly a third (29%) of employees acknowledging their companies have policies in place that prohibit them from inputting company, client or other sensitive information into GenAI assistants.

Regardless of the risks, many employees in the survey indicated that their company is falling short on providing them with information and training to use GenAI safely:

Only 24% of employees said their company requires mandatory AI assistant training.
44% said their company does not have AI guidelines or policies in place, or they don’t know if their company does.
50% said they are not sure if they're adhering to their company’s AI guidelines.
42% said there are no repercussions for not following their company’s AI guidelines.

businesswire.com/news/home/202 #Infosec #Legal

2025-02-26

🤔 TELUS Digital Survey Reveals Enterprise Employees Are Entering Sensitive Data Into AI Assistants More Than You Think. 57% of enterprise employees admit to entering high-risk information into publicly available generative AI assistants, exposing critical security gaps in enterprise AI usage businesswire.com/news/home/202 #Infosec

2025-02-16

👍 Microsoft Research OmniParser V2: Turning Any LLM into a Computer Use Agent microsoft.com/en-us/research/a #MachineLearning

2025-02-16

Powerful use of AI for security 🛡️ ProjectDiscovery: reinventing custom detections and vulnerability management. For those new to Nuclei, it's an open-source vulnerability scanner that thrives on community-driven intelligence. Its template-based approach makes it incredibly flexible, allowing users to write detections for virtually any security risk, from misconfigurations to zero-day exploits.

Last year, we introduced the AI Template Editor, and the response was incredible, with over 40,000 templates created.

Today, with ProjectDiscovery v1, we’re making it even easier to create and manage custom Nuclei templates. From improvements to AI Template Editor to automating how security teams monitor for regressions, we’re committed to helping security teams protect their organizations from every type of security risk.

projectdiscovery.io/blog/reinv

#Infosec
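
To make the template model concrete, here is a minimal sketch (Python) that writes an illustrative Nuclei template and runs it with the nuclei CLI. The check itself (an exposed .env file) and the target URL are placeholders, and it assumes the nuclei binary is installed; only scan hosts you are authorized to test:

```
import subprocess
from pathlib import Path

# An illustrative Nuclei template: one HTTP request plus a word matcher.
TEMPLATE = """\
id: exposed-env-file

info:
  name: Exposed .env file
  author: example
  severity: medium

http:
  - method: GET
    path:
      - "{{BaseURL}}/.env"
    matchers:
      - type: word
        words:
          - "APP_KEY="
"""

template_path = Path("exposed-env.yaml")
template_path.write_text(TEMPLATE)

# Placeholder target; point nuclei at infrastructure you own.
subprocess.run(["nuclei", "-t", str(template_path), "-u", "https://target.example.com"], check=False)
```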

2025-02-15

👍 Amazon Q Developer now supports upgrades to Java 21. In just a few steps, update applications to the latest supported Java versions, gain performance benefits, and remove vulnerabilities in unsupported versions. aws.amazon.com/about-aws/whats #AWS #Infosec

2025-02-15

🤔 The Benefits of the M&A Frenzy in Fraud Solutions - Emerging Vendors, Consolidation Drive Innovation in Fraud, AML, Scam Prevention. Scam prevention is an emerging industry with high growth, as online financial scams have grown in recent years. The Global Anti-Scam Alliance reported that scammers stole $1.03 trillion in 2024. The group says deepfake-related crime increased by more than 1,500% between 2022 and 2023. bankinfosecurity.com/benefits- #Infosec

2025-02-03

Impressive 🪄 Today we’re launching deep research in ChatGPT, a new agentic capability that conducts multi-step research on the internet for complex tasks. It accomplishes in tens of minutes what would take a human many hours. Deep research is OpenAI's next agent that can do work for you independently—you give it a prompt, and ChatGPT will find, analyze, and synthesize hundreds of online sources to create a comprehensive report at the level of a research analyst. Powered by a version of the upcoming OpenAI o3 model that’s optimized for web browsing and data analysis, it leverages reasoning to search, interpret, and analyze massive amounts of text, images, and PDFs on the internet, pivoting as needed in reaction to information it encounters. openai.com/index/introducing-d #MachineLearning #AI
