The PQA IPR now has a final written decision (FWD) finding all challenged claims invalid.
So that's both of the patents VLSI won its Intel verdict on now invalidated. Discretionary denial is a complete policy failure, so why is the USPTO looking to make rules enshrining it into law?
While there are much, much bigger problems with this AI hearing, I'd note that Sen. Blumenthal's attribution of "a lie can get halfway around the world" to Mark Twain is, well... misinformation. https://www.nytimes.com/2017/04/26/books/famous-misquotations.html
Interesting bit from Kagan's dissent in today's AWF/Goldsmith case: "For, let's be honest, artists don't create all on their own; they cannot do what they do without borrowing from or otherwise making use of the works of others. That is the way artistry of all kinds—visual, musical, literary—happens." Seems highly relevant to a week in which we've been hearing so much about AI.
More orders released re: Judge Newman and it’s, uh, not great.
@tokensane That’s reductionist. If the AI is designed to generate something that would carry liability (e.g., a bot designed to generate defamatory statements about people), then the creator should bear liability for that design decision, though the person using the output would as well for the specific act of defamation.
(I'd add - Hi Blake! - that AI has huge potential benefits for people with disabilities, and that disability needs to be considered when creating these systems to ensure those users aren't blocked or disfavored from use.)
Also Sen. Padilla's question about an over-focus on English is 100% on point.
Montgomery making the simplest point, but also the most important one - AI is not a shield. Whether an AI tool or a human does something, it should be treated the same.
Hawley is just wrong here. If you're harmed by AI, and that harm is a legally cognizable harm, then yeah, you can sue.
I think Sen. Hirono's question about how the same AI that makes up a funny joke can also make up election disinfo illustrates an important point.
A smartphone camera can be used to record your kid's dance recital. It can also be used for copyright infringement. It's essential to keep both in mind, not just focus on the latter.
@blakereid Ah yeah definitely true. But the law ought to do that wrestling; we shouldn't let the last 20 years of not doing it stop us from doing it now.
AI songwriting is not a sin, says Neil Tennant of Pet Shop Boys https://www.theguardian.com/music/2023/may/16/ai-songwriting-is-not-a-sin-says-neil-tennant-of-pet-shop-boys?CMP=Share_iOSApp_Other
👀 WE'RE HIRING: (That’s #CDT, not Privacy Digest) 😀
* Deputy Director, Free Expression Project
* Policy Associate/Analyst, Equity in Civic Technology Project
Interested? Know someone who might be? RETWEET & TAG 'EM: #TechPolicyJobs #DCJobs #jobs
https://cdt.org/careers/
On the concentration discussion going on in the #SenateAIHearing right now - let's not forget that AI's lowering of the barrier to entry in many arenas can help reduce concentration in the economy writ large.
@blakereid Oh I mean that as a sui generis thing, not a direct application of 230; I don't think 230 quite fits. (I'm not sure if it applies to AI or not, haven't considered that question in the detail it probably deserves.)
Licensing works for drugs, since there's a limited number of drugs to consider and huge, known potential harms.
I'm unconvinced that model is a good fit for AI, where the opposite is true on both counts.
Big shoutout to Mary Hannon, whose article attracted the notice of some Senators and helped get the ball rolling on #PatentBar reform.
You can read her piece (which she wrote as a law student) here:
I'm no fan of Montgomery, but I really wish these dudes would stop interrupting her.
#SenateAIHearing
@blakereid I think a lot of that work can (and should!) be done by existing law. Product liability law gets us a long way on defective designs; harms from output are going to be covered by a wide range of existing law, but in general I'm unconvinced we need a ton of new laws.
FWIW I'm skeptical an agency is the right solution for a wide-ranging technology (we don't have an agency for computing either). It touches too many different areas; it's far more productive to have AI regulation come from existing agencies in their individual areas of expertise.