#AI_ML

Geekland @geekland
2025-05-27
Todd A. Jacobs | Pragmatic Cybersecurity @todd_a_jacobs@infosec.exchange
2024-09-21

@hacks4pancakes You overestimate how many people in the USA even understand the gap in privacy protections between here and the EU. If you're on #LinkedIn, check out the current brouhaha about LinkedIn opting all non-EU residents into its #AI_ML #dataaggregation and training without notice and before updating its terms of service.

In the US, most non-enterprise #TOS contracts of adhesion basically make it our responsibility to stay on top of the changes anyway; most of the time individuals aren't even given advance notice. Simply continuing to use the service in ignorance implies agreement with any new terms. In addition, those terms almost always say you agree that they can be changed at any time, with or without notice, at the service provider's sole discretion.

I've spent this entire week explaining to people why LinkedIn is allowed to do that to us but not to you; why they won't get more than a slap on the wrist, if that, from any oversight body; and why our current copyright laws and precedents around software licensing basically ensure that LinkedIn, #Microsoft, #Google, and #OpenAI will continue to get a free pass on literally stealing people's data and intellectual property, making it "proprietary," sealing it up behind an impenetrable paywall, and then selling a slurry of appropriated data (including their own) back to them at the highest price the market will bear.

That's not free-market capitalism. It's just corporate welfare for large companies, institutional stockholders, and chip makers, plus a dash of good ol' fashioned "Robber Baron" economics.

Todd A. Jacobs | Pragmatic Cybersecurity @todd_a_jacobs@infosec.exchange
2024-05-18

Want to know the best-kept "secret" in #cybersecurity for avoiding a potential #databreach or putting #customerdata in harm's way? Every experienced #CIO and #CISO already knows it by heart because it's super simple: "Don't collect unnecessary data in the first place!"

Even if a product actually needs the data for legitimate reasons from the customers' point of view, they should still be informed of the alleged necessity first, and then asked for permission to collect and use the data. That ensures that customers have the opportunity to evaluate the sensitivity of the data involved and determine for themselves what the potential risks and rewards of sharing it might be. Collecting the data first and then expecting customers to believe that a vendor can or will honor a future opt-out request is just silly, especially in the modern age of giant data lakes, massive online redundancy, 100+ year shelf-lives for petabytes of off-site storage media, and sub-sub-sub data processors.
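To make that opt-in-first, collect-only-what's-consented-to idea concrete, here's a minimal Python sketch. It isn't tied to any vendor's API, and every name in it (ConsentRecord, collect_profile, the field names) is hypothetical; the point is simply that anything the customer hasn't explicitly approved never gets stored in the first place.

```python
from dataclasses import dataclass, field


@dataclass
class ConsentRecord:
    # Maps a field name (e.g. "email") to an explicit opt-in decision.
    # A field that was never asked about is treated as "no".
    decisions: dict[str, bool] = field(default_factory=dict)

    def allows(self, field_name: str) -> bool:
        return self.decisions.get(field_name, False)


def collect_profile(raw_input: dict[str, str], consent: ConsentRecord) -> dict[str, str]:
    """Keep only the fields the customer consented to; silently drop the rest."""
    return {k: v for k, v in raw_input.items() if consent.allows(k)}


if __name__ == "__main__":
    submitted = {"email": "user@example.com", "phone": "+1-555-0100", "dob": "1990-01-01"}
    consent = ConsentRecord(decisions={"email": True})  # only email was opted into
    stored = collect_profile(submitted, consent)
    print(stored)  # {'email': 'user@example.com'} -- phone and dob are never persisted
```

The design choice that matters is the default: absence of consent means refusal, so there's no "collect now, honor an opt-out later" window to abuse.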

This is an extremely tone-deaf approach by #Salesforce to the current regulatory issues around mass data collection, whether or not it's #AI_ML related. It is also unlikely that this policy complies with EU #privacyregulations or #AIgovernance laws. I'm neither a lawyer nor a party to any associated DPAs or NDAs related to this particular service, but if you're responsible for vendor selection, #regulatorycompliance, or #dataprivacy at your organization, you need to go screenshot this before Salesforce tries to walk it back and pretend it never happened—leaving you holding the bag when your customers' data is inevitably exposed, of course.

help.salesforce.com/s/articleV
