#engineer

N-gated Hacker News (@ngate)
2025-05-28

A paper on AR-Diffusion pretends to revolutionize text generation with a model so "advanced" it can't even generate its own job listing. 🔄🤖 Meanwhile, arXiv's relentless quest for a DevOps engineer continues as they remind us that "open science" is important—especially if you can keep their site from crashing. 💻🛠️
arxiv.org/abs/2305.09515

2025-05-28

♻️Less than 10% of Global Plastics are manufactured from recycled Materials.

The findings, reported in Communications Earth & Environment, are part of a comprehensive analysis of the global plastics sector, which also reveals a large increase in the amount of plastic being disposed of by incineration and substantial regional differences in plastic consumption.

nature.com/articles/s43247-025

#plastic #pollution #recycling #chemistry #science #engineer #media #nature #tech #news

Plastic production has increased from 2 million tons per year in 1950 to 400 million tons per year in 2022 and is projected to reach 800 million tons per year by 2050. As a result, plastic pollution is a pressing and growing global issue, posing major challenges for the environment, economy and public health. However, there is currently little comprehensive analysis of the contemporary global plastics sector.

Quanyin Tan and colleagues conducted an analysis of the global plastics sector in 2022, using data from national statistics, industry reports and international databases to produce a detailed global and regional overview of plastic production, use, and disposal. Data from their analysis highlights key trends in the global plastic supply chain.

The authors say the study provides important data for devising future policies and regulations.
[ImageSource: Communications Earth & Environment (2025). DOI: 10.1038/s43247-025-02169-5]

Global plastic cycles in 2022.

Of the 400 million tons of plastic produced over the year, just under 38 million tons [9.5%] were produced from recycled plastic. Some 98% of the remaining 362 million tons were produced from fossil fuels, predominantly coal and oil.

Around 268 million tons of plastic was disposed of over the year, with only 27.9% sent for sorting and potential recycling [36.2% was instead sent directly to landfill and 22.2% was sent directly to incineration]. Additionally, only half of the sorted plastic was actually recycled, with 41% of the sorted plastic instead incinerated and 8.4% sent to landfill.

However, the total percentage of global plastic waste sent to landfill in 2022 (40%) has still decreased significantly compared to the estimated 79% of all global plastic waste sent to landfill between 1950 and 2015. The U.S. had the highest per capita plastic consumption, with an average of 216 kg of plastic consumed per person per year, while China consumed the most plastic overall [80 million tons per year].
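
As a rough back-of-envelope check on what these percentages mean in absolute terms, the sketch below multiplies out the disposal shares (my own arithmetic, not from the paper; the listed routes don't cover every disposal stream and figures are rounded, so the landfill total comes out slightly below the study's 40%):

```python
# Approximate 2022 plastic disposal mass flows, in million tons (Mt).
disposed_mt = 268                      # total plastic disposed of in 2022

sorted_mt = disposed_mt * 0.279        # sent for sorting / potential recycling
landfill_direct_mt = disposed_mt * 0.362
incinerated_direct_mt = disposed_mt * 0.222

recycled_mt = sorted_mt * 0.50         # about half of sorted plastic is recycled
incinerated_from_sorted_mt = sorted_mt * 0.41
landfill_from_sorted_mt = sorted_mt * 0.084

total_landfill_mt = landfill_direct_mt + landfill_from_sorted_mt
print(f"sorted: {sorted_mt:.0f} Mt, actually recycled: {recycled_mt:.0f} Mt")
print(f"landfill total: {total_landfill_mt:.0f} Mt "
      f"({total_landfill_mt / disposed_mt:.0%} of disposed plastic)")
```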

<https://dx.doi.org/10.1038/s43247-025-02169-5>
BCWHS
2025-05-27

Wade Bachelder is a ColdFusion Application Developer, Systems Security Engineer, G.R.C. Analyst, and Consultant.
wadebach.blackcatwhitehatsecur

Markus Schlichting (@madmas@jit.social)
2025-05-27

Very happy that Bruno Souza (java.mn/about/) gave @karakun a visit today after @martinfrancois introduced us during @jcon earlier this month. If you are in the Zürich or Bern area, I strongly recommend joining the @jugch sessions he is taking part in there today and tomorrow! See jug.ch/html/events/2025/art-of and jug.ch/eventpreview.php?id=955
have fun!

#developer #engineer #career #growth #opensource #community #java

2025-05-27

Vibe Coding Company says Claude 4 reduced Syntax Errors by 25% and made it 40% faster.

On May 22, Anthropic started rolling out two new models: Claude Sonnet 4 and Claude Opus 4. While Sonnet is available to free users, Opus requires a paid subscription and performs better than Sonnet when it comes to coding.

🖇️Check my Image Descriptions🖇️

anthropic.com/news/claude-4

#anthropic #claude4 #vibecoding #llm #ai #programming #engineer #media #developer #tech #news

Lovable, which is a Vibe coding tool, says Claude 4 has reduced its errors by 25% and made it faster by 40%.

<https://x.com/lovable_dev/status/1926675103808385453>

Claude models have long held the reputation of "best at coding", but there has been steep competition from Google lately, which released Gemini 2.5 Pro with a 1-million-token context window.

Compared to the 200,000-token context window of Claude 4 and older models, Gemini 2.5's 1-million-token window does give it an advantage. But that doesn't necessarily mean Gemini 2.5 is better than Claude 4 at coding.

👾Both can be surprisingly brilliant and terrible at the same time, and a lot comes down to how you do prompt engineering. It's often worth mixing models, for example o3 or Gemini for planning and Claude 4 or Gemini for coding.👾
[ImageSource: Anthropic.com]

In a blog post, Anthropic confirmed that Claude Opus 4 scored 72.5 percent on SWE-bench [a software engineering benchmark].

<https://www.anthropic.com/news/claude-4>

In the tests, Opus 4 delivered sustained performance on long-running tasks that require focused effort and thousands of steps. Anthropic also claimed that its newest model worked on the code for seven hours straight.
[ImageSource: Lovable AI]

Claude 4 reduced syntax errors by 25% on Lovable AI.

Vibe coding company Lovable, which uses Claude in its "AI-powered prompt-based web and apps builder" tool, has observed significant improvements after upgrading to Claude 4.

In a post on X, Lovable says it has seen 25% fewer errors and is 40% faster overall after deploying Claude 4 for both project creation and edits on all projects [including old projects].

<https://x.com/lovable_dev/status/1926675103808385453>

In a separate post, Lovable founder Anton Osika confirmed that "Claude 4 just erased most of Lovable's errors" while specifically referring to LLM syntax errors when vibe coding.

<https://x.com/antonosika/status/1926719161935233139>
2025-05-26

Beware: TikTok Videos now push Infostealer Malware in ClickFix Attacks.

The threat actors behind this TikTok social engineering campaign are using videos, likely AI-generated, that ask viewers to run commands claiming to activate Windows and Microsoft Office, as well as premium features in various legitimate software like CapCut & Spotify.

trendmicro.com/en_us/research/

#tiktok #socialmedia #it #security #privacy #engineer #media #tech #news

[ImageSource: Trend Micro]

TikTok ClickFix Video

One of the videos, claiming to provide instructions on how to "boost your Spotify experience instantly," has reached almost 500,000 views, with over 20,000 likes and more than 100 comments.

"This attack uses videos [possibly AI-generated] to instruct users to execute PowerShell commands, which are disguised as software activation steps. TikTok's algorithmic reach increases the likelihood of widespread exposure, with one video reaching more than half a million views," Trend Micro said. "The videos are highly similar, with only minor differences in camera angles and the download URLs used by PowerShell to fetch the payload."

"These suggest that the videos were likely created through automation. The instructional voice also appears AI-generated, reinforcing the likelihood that AI tools are being used to produce these videos," it added.

⚠️Beware: The malware was pushed through videos that received over a million views shortly after being posted; it can steal Discord accounts, passwords, credit cards and cryptocurrency wallets.⚠️
[ImageSource: Trend Micro]

Attack Flow

• In this attack, the videos prompt viewers to run a PowerShell command that instead downloads and executes a remote script from hxxps://allaivo[.]me/spotify, which installs Vidar or StealC information-stealing malware and launches it as a hidden process with elevated permissions.

• After being deployed, Vidar can take desktop screenshots and steal credentials, credit cards, cookies, cryptocurrency wallets, text files and Authy 2FA authenticator databases.

• StealC can also harvest a wide range of sensitive information from infected computers as it targets dozens of web browsers and cryptocurrency wallets.

• After the device is compromised, the script downloads a second PowerShell script payload from hxxps://amssh[.]co/script[.]ps1 that adds a registry key so the malware launches automatically at startup.
2025-05-25

Can it run Llama 2? Now DOS can.

Will a 486 run Crysis? No, of course not. Will it run a large language model [LLM]? Given the huge buildout of compute power to do just that, many people would scoff at the very notion. But [Yeo Kheng Meng] is not many people.

yeokhengmeng.com/2025/04/llama

#msdos #llm #llama2 #artificialintelligence #retrocomputing #engineer #media #retro #programming #tech #ai #news

[Yeo Kheng Meng] has set up various DOS computers to run a stripped down version of the Llama 2 LLM, originally from Meta. More specifically, [Yeo Kheng Meng] is implementing [Andrej Karpathy]’s Llama2.c library [running on Windows 98].

<https://youtu.be/4241obgG_QI>

Llama2.c is a wonderful bit of programming that lets one run inference on a trained Llama2 model in only seven hundred lines of C. It is seven hundred lines of modern C, however, so porting it to DOS 6.22 and the outdated i386 architecture took some doing. [Yeo Kheng Meng] documents that work, and benchmarks a few retrocomputers. As painful as it may be to say — yes, a 486 or a Pentium 1 can now be counted as “retro”.
[ImageSource: Yeo Kheng Meng]
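
For a rough sense of what "inferencing" a checkpoint actually involves, here is a conceptual Python sketch of the autoregressive loop that llama2.c implements in C; the forward() call here is a toy stand-in, not the real transformer or anything from the DOS port:

```python
import random
import time

def generate(forward, prompt_tokens, n_steps, vocab_size):
    """Greedy autoregressive decoding: each predicted token is fed back in.

    forward(token, pos) -> logits is a stand-in for llama2.c's
    hand-written transformer forward pass.
    """
    tokens = list(prompt_tokens)
    start = time.perf_counter()
    for _ in range(n_steps):
        pos = len(tokens) - 1
        logits = forward(tokens[pos], pos)
        tokens.append(max(range(vocab_size), key=lambda t: logits[t]))
    elapsed = time.perf_counter() - start
    print(f"{n_steps / elapsed:.2f} tokens/s")  # the metric benchmarked below
    return tokens

# Toy stand-in "model" so the sketch runs end to end.
VOCAB = 32
toy_forward = lambda token, pos: [random.random() for _ in range(VOCAB)]
print(generate(toy_forward, [1], n_steps=20, vocab_size=VOCAB))
```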

The models are not large, of course, with a TinyStories-trained 260 kB model churning out a blistering 2.08 tokens per second on a generic 486 box.

<https://github.com/yeokm1/dosllam2>

Newer machines can run larger models faster, of course. Ironically, a Pentium M Thinkpad T24 [was that really 21 years ago?] is able to run a larger 110 MB model faster than [Yeo Kheng Meng]’s modern Ryzen 5 desktop. Not because the Pentium M is blazing fast, mind you, but because a memory allocation error prevented that model from running on the modern CPU. Slow and steady finishes the race, it seems.

This port will run on any 32-bit i386 hardware, which leaves the 16-bit regime as the next challenge. If one of you can get Llama 2 hosted locally on a 286 or a 68000-based machine, then we may have to stop asking “Does it run DOOM?” and start asking “Will it run an LLM?”
2025-05-24

“Glasses” that transcribe Text to Audio.

Glasses for the blind might sound like an odd idea, given the traditional purpose of glasses and the issue of vision impairment. However, [Akhil Nagori] built these glasses with an alternate purpose in mind. They’re not really for seeing. Instead, they’re outfitted with hardware to capture text & read it aloud.

instructables.com/Vision-Glass

#diy #smart #glasses #maker #engineer #media #tech #art #news

It’s funny to think about how advanced this project really is. Jump back to the dawn of the microcomputer era, and such a device would have been a total flight of fancy — something a researcher might make a PhD and career out of. Indeed, OCR and speech synthesis alone were challenge enough.

Today, you can stand on the shoulders of giants and include such mighty capability in a homebrewed device that cost less than $50 to assemble. It’s a neat project, too, and one that I’m sure taught [Akhil] many valuable skills along the way.
[ImageSource: Akhil Nagori]

Yes, we’re talking about real-time text-to-audio transcription, built into a head-worn format. The hardware is pretty straightforward: a Raspberry Pi Zero 2W runs off a battery and is outfitted with the usual first-party camera. The camera is mounted on a set of eyeglass frames so that it points at whatever the wearer might be “looking” at.

At the push of a button, the camera captures an image, and then passes it to an API which does the optical character recognition. The text can then be passed to a speech synthesizer so it can be read aloud to the wearer.
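
As a rough illustration of that capture-then-OCR-then-speak loop (not [Akhil]'s actual code: the project calls an online OCR API, while this sketch assumes local picamera2, pytesseract, and pyttsx3 as stand-ins):

```python
# Minimal capture -> OCR -> text-to-speech loop. Assumes a Raspberry Pi
# camera driven via picamera2, plus pytesseract and pyttsx3 installed locally.
from picamera2 import Picamera2
from PIL import Image
import pytesseract
import pyttsx3

camera = Picamera2()
camera.start()
tts = pyttsx3.init()

def read_aloud():
    frame = camera.capture_array()                               # grab a still frame
    text = pytesseract.image_to_string(Image.fromarray(frame))   # OCR the frame
    if text.strip():
        tts.say(text)        # queue the recognized text
        tts.runAndWait()     # speak it through the attached audio output

read_aloud()  # in the real device this is triggered by the push button
```
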
salathehugo (@salathe)
2025-05-24
2025-05-23

:blobcatgamer: You can now play DOOM in a standalone Microsoft Word Document.

The single 6.6MB document file [available via GitHub] contains a source port of doomgeneric. Users will need a modern version of Microsoft Office/Word on an x86 computer system, and will have to dismiss the security warnings in order to enable the VBA macro in the document.

github.com/wojciech-graj/doom-

#microsoft #word #document #doom #port #retro #gaming #art #engineer #media #programming #artist #tech #news

[ImageSource: Wojciech Graj]

The game seems to run quite smoothly. However, in the background, "Every game tick, doomgeneric.dll creates a bmp image containing the current frame and uses GetAsyncKeyState to read the keyboard state," notes Graj. Perhaps this is why the viewport is quite small [original 320 x 200 pixels?] — to keep the game responsive.

WordDoom gamers can use their arrow keys for movement, Control key for fire, Space key for use, and number keys 1-7 for weapon selection. Graj highlights that there is no sound in this game release.
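
To make the input-polling detail concrete, here is a small Windows-only Python/ctypes sketch of the same GetAsyncKeyState polling (the Word port does this from VBA and doomgeneric.dll, not Python):

```python
# Poll DOOM's movement/fire/use keys the way the post describes,
# using the Win32 GetAsyncKeyState API via ctypes (Windows only).
import ctypes
import time

user32 = ctypes.windll.user32
KEYS = {"left": 0x25, "right": 0x27, "fire": 0x11, "use": 0x20}  # virtual-key codes

def pressed(vk_code):
    # High-order bit set means the key is currently held down.
    return bool(user32.GetAsyncKeyState(vk_code) & 0x8000)

for _ in range(35 * 5):                         # poll for about five seconds
    state = {name: pressed(vk) for name, vk in KEYS.items()}
    print(state, end="\r")
    time.sleep(1 / 35)                          # classic DOOM runs at 35 tics/s
```
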
2025-05-22

Scotty telling it like it is 🖖😎

#StarTrek #TOS #MontgomeryScott #Scotty #Engineer

2025-05-22

:omya_zoom: Zoom fixes High-Risk Flaw in latest Update.

Zoom fixes multiple security bugs in Workplace Apps, including a high-risk flaw. Users are urged to update to the latest version. For anyone using Zoom in business or education settings, especially on Windows systems, these updates are worth attention.

zoom.com/en/trust/security-bul

#zoom #update #video #chat #security #privacy #engineer #media #it #tech #news

Zoom pushed out a batch of security fixes, addressing multiple vulnerabilities across its Workplace Apps. One of them has been marked high severity, while the others are rated medium. The updates affect both general app versions and Windows-specific builds.

The most significant of the bunch is a time-of-check to time-of-use [TOCTOU] issue listed under [CVE-2025-30663]. This type of bug occurs when there’s a delay between a system checking if an action is safe and performing it. During that short window, attackers might interfere. This bug affects Zoom Workplace Apps broadly and was rated high severity.
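
As a generic illustration of the bug class (not Zoom's specific flaw, whose internals haven't been published), here is a classic check-then-use race sketched in Python: the path is validated with os.access(), but an attacker who swaps in a symlink during the window between the check and the open() wins the race:

```python
import os

def read_report(path):
    # Time of check: verify the file looks safe to read.
    if not os.access(path, os.R_OK):
        raise PermissionError(f"not allowed to read {path}")
    # ...window of opportunity: an attacker replaces `path` with a
    # symlink to a sensitive file before the next line runs...
    # Time of use: open whatever `path` points to *now*.
    with open(path) as f:
        return f.read()

# Mitigation pattern: drop the separate check and validate the opened file
# descriptor instead (os.open + os.fstat), so check and use hit the same object.
```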

The rest of the vulnerabilities carry medium severity ratings. Here’s a quick breakdown:

• Affects: All Workplace Apps
• CVEs: CVE-2025-46786, CVE-2025-46787, CVE-2025-30664
• Issue: These bugs involve the mishandling of user inputs, which could allow scripts or commands to be executed in unexpected ways.

• Affects: Windows versions
• CVE: CVE-2025-46785
• Issue: This bug could lead to the application reading more data than it should, risking exposure of sensitive information.

All seven bulletins are published on Zoom’s official security bulletin page.
[ImageSource: Shutterstock]

In a comment, [Jim Routh], Chief Trust Officer at Saviynt, stated, “Cyber professionals are considering the need for deepfake detection and prevention impacting virtual meetings today. It turns out that the software defects/vulnerabilities announced recently in Zoom Workplace are far more critical at this time.”

<https://www.linkedin.com/in/jmrouth>

“DoS and remote code execution vulnerabilities have the potential for significant business disruption with the potential for ransomware exploits,” he added. “Software resilience for enterprise software companies is achievable with more maturity in the development process to identify and remediate race conditions.”

Zoom is widely used across industries, and bugs like these, combined with others, can pose a massive security risk. While the technical details may not apply to everyday users, IT teams should treat this as a routine security maintenance window. Applying the patches quickly reduces the chance of these issues being exploited.

⚠️Therefore, if you use Zoom Workplace Apps, update now. The patches are live and available for download. Admins managing enterprise deployments should review their update pipelines to make sure these fixes are rolled out across all user endpoints.⚠️
Søren Kjærsgaard (@oz1lqo@techhub.social)
2025-05-21

Yesterday was #worldmetrologyday and thus, a perfect occasion to do the monthly statistical readout of my four 10V references, based on the popular LM399.

Two of them arrived January 15 and have been powered ever since. A couple of months later, on March 22, I added two more. The idea is that the standard deviation (noise) of the 10V output should improve with time, lots of time 😂

Well, that is in fact what I see. Three of them are now at 1.15uV, 1.17uV and 1.45uV.
The fourth, however, started out really high, 81uV 😳, but has since improved to 29uV in just two months.
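
For anyone wanting to do a similar readout, the number being tracked is just the sample standard deviation of repeated voltmeter readings of the 10V output, expressed in microvolts. A minimal sketch (the readings below are made-up values, purely for illustration):

```python
import statistics

# Hypothetical repeated DMM readings of one 10 V reference, in volts.
readings_v = [10.0000012, 9.9999988, 10.0000022, 9.9999995, 10.0000010]

mean_v = statistics.mean(readings_v)
sigma_uv = statistics.stdev(readings_v) * 1e6   # sample std dev, in microvolts

print(f"mean = {mean_v:.7f} V, sigma = {sigma_uv:.2f} uV")
```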

I plan to keep doing this, - Let’s see what the rest of the year brings 🙂🙂

Long term plan: a 10V calibration transfer 👍🏼

#testandmeasurement #engineer #electronicsengineering #electronics

2025-05-21

NASA keeps ancient Voyager 1 Spacecraft alive with Hail Mary Thruster Fix. [Before DSN Command Pause]

NASA has revived a set of thrusters on the nearly 50-year-old Voyager 1 spacecraft after declaring them inoperable over two decades ago. [Voyager 1, launched in 1977, is now traveling through interstellar space at around 56,000 kph]

jpl.nasa.gov/news/nasas-voyage

#nasa #voyager1 #space #science #engineer #media #tech #news

It's a nice long-distance engineering win for the team at NASA's Jet Propulsion Laboratory, responsible for keeping the venerable Voyager spacecraft flying — and a critical one at that, as clogging fuel lines threatened to derail the backup thrusters currently in use. 

The things you have to deal with when your spacecraft is operating more than four decades beyond its original mission plan, eh? Voyager 1 launched in 1977.

JPL reported that the maneuver, completed in March, restarted Voyager 1's primary roll thrusters, which are used to keep the spacecraft aligned with a tracking star. That guide star helps keep its high-gain antenna aimed at Earth, now over 15.6 billion miles away, and far beyond the reach of any telescope.

"It was such a glorious moment. Team morale was very high that day," said Todd Barber, the mission's propulsion lead at JPL. "These thrusters were considered dead. And that was a legitimate conclusion. It's just that one of our engineers had this insight that maybe there was this other possible cause and it was fixable. It was yet another miracle save for Voyager."

Those primary roll thrusters stopped working in 2004 after a pair of internal heaters lost power. Voyager engineers long believed they were broken and unfixable. The backup roll thrusters in use are now at risk due to residue buildup in their fuel lines, which could cause failure as early as this fall.
Hacker News (@h4ckernews)
2025-05-21
2025-05-20

Exciting job alert — @arXiv is hiring a DevOps Engineer!

info.arxiv.org/hiring/index.ht

#hiring #DevOps #Engineer #job

2025-05-20

Google paid $12 Million in Bug Bounties last Year

Google revamped the Vulnerability Reward Program [VRP] reward structure, bumping rewards up to a maximum of $151,515, while its Mobile VRP now offers up to $300,000 for critical vulnerabilities in top-tier apps [with a maximum reward reaching $450,000 for exceptional quality reports].

security.googleblog.com/2025/0

#google #bugbounty #rewards #it #security #privacy #engineer #media #tech #news

[ImageSource: Google]

Google VRP bug bounty rewards paid since 2019.

In 2024, Google awarded $3.4 million to 137 Chrome VRP researchers after analyzing 137 reports of valid Chrome security bugs. The company also paid over $3.3 million to researchers who reported security bugs through the company's Android and Google Devices Security Reward Program and the Google Mobile Vulnerability Reward Program.

"In 2025, we will be celebrating 15 years of VRP at Google, during which we have remained fully committed to fostering collaboration, innovation, and transparency with the security community, and will continue to do so in the future," Google said. "Our goal remains to stay ahead of emerging threats, adapt to evolving technologies, and continue to strengthen the security posture of Google's products and services."

💵The company says it has awarded $65 million in bug bounties since its first vulnerability reward program went live in 2010, while the highest reward paid last year was over $110,000.💵
2025-05-19

Bluetooth 6.1 enhances Privacy.

The Bluetooth Special Interest Group [SIG] has announced Bluetooth Core Specification 6.1, bringing important improvements to the popular wireless communication protocol. One new feature is increased device privacy via randomized Resolvable Private Address [RPA] updates.

bluetooth.com/blog/delivering-

#bluetooth #rpa #timing #it #security #privacy #engineer #media #tech #news

A Resolvable Private Address (RPA) is a Bluetooth address created to look random and is used in place of a device's fixed MAC address to protect user privacy. It allows trusted devices to securely reconnect without revealing their true identity. Currently, RPAs are updated at fixed intervals, usually every 15 minutes, which introduces a level of predictability. This predictability can be exploited in correlation attacks, making long-term tracking possible.

Bluetooth 6.1 improves privacy by randomizing RPA updates within an 8-to-15-minute window [the default], while also allowing custom values anywhere from 1 second to 1 hour.

<https://files.bluetooth.com/download/core_v6-1/>

⚠️"Randomizing the timing of address changes makes it much more difficult for third parties to track or correlate device activity over time," reads SIG's announcement.⚠️

Another new feature in Bluetooth 6.1 is better power efficiency, which stems from allowing the chip [Controller] to autonomously handle the randomized RPA updates. Specifically, the Bluetooth chip will choose the randomized timing intervals and generate and update the RPA internally without waking the host device.
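
In rough Python terms, the controller-side behaviour amounts to something like the sketch below (my own illustration; real RPA generation uses the device's Identity Resolving Key and the ah() hash from the Core Specification, for which secrets.token_bytes is only a stand-in):

```python
import secrets
import time

RPA_MIN_S = 8 * 60    # default lower bound in Bluetooth 6.1
RPA_MAX_S = 15 * 60   # default upper bound (custom range: 1 s to 1 h)

def next_rotation_delay():
    """Pick an unpredictable delay until the next address change."""
    return RPA_MIN_S + secrets.randbelow(RPA_MAX_S - RPA_MIN_S + 1)

def rotate_rpa_forever(set_address):
    """In 6.1 this loop lives on the controller, so the host CPU can stay asleep."""
    while True:
        time.sleep(next_rotation_delay())        # uniformly random 8-15 minutes
        set_address(secrets.token_bytes(6))      # stand-in for real RPA derivation
```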

👾This saves CPU cycles and memory operations, so power is saved whenever the feature is in use. For smaller devices like fitness bands, earbuds and IoT sensors, this could make a big difference in battery life.👾
2025-05-18

LegoGPT is here to make your blocky Dreams come True — Now available for free to the Public.

A research team from Carnegie Mellon University built an AI model called LegoGPT that outputs valid LEGO designs from text inputs. The AI was trained on a dataset of more than 47,000 LEGO structures representing over 28,000 unique 3D objects [cars, ships, guitars and more].

avalovelace1.github.io/LegoGPT

#legogpt #artificialintelligence #lego #art #it #engineer #ai #artist #media #tech #news

According to the team’s research paper that's posted on GitHub, they trained “an autoregressive large language model to predict the next brick to add via next-token prediction,” but the key takeaway is that the AI LLM creates LEGO designs from scratch.

The tool is available for free on GitHub, and you can pair this with a computer vision model or image processing AI. For example, you can take a photo of your available LEGO bricks and let the AI give you a multitude of unique options for building with what you already have.

How it creates a new design through text:

To teach the model this mapping, each LEGO design in the training set is converted into a sequence of text tokens, with bricks ordered from bottom to top. These brick sequences are then paired with captions describing the design, so that the model learns the relationship between a text prompt and the physical bricks.
[ImageSource: Pun, Deng, Liu, Ramanan, Liu, Zhu / Carnegie Mellon University]

The team added a validity check and physics-aware rollback during autoregressive inference, ensuring that the final output will always be valid [i.e., no overlapping bricks] and stable [i.e., no floating bricks]. Furthermore, LegoGPT’s final output can be built by both humans and robots.
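
A rough sketch of how such a validity check and rollback can sit inside the autoregressive loop (my own illustration, not the team's code; is_valid and is_stable stand in for their brick-collision and physics checks, and their rollback may discard more than one brick):

```python
def generate_structure(propose_next_brick, is_valid, is_stable, max_bricks=200):
    """Autoregressively add bricks, rejecting invalid ones and rolling back
    any brick that leaves the structure physically unstable."""
    bricks = []
    for _ in range(max_bricks):
        candidate = propose_next_brick(bricks)   # next-token (next-brick) prediction
        if candidate is None:                    # model signalled end of structure
            break
        if not is_valid(bricks, candidate):      # e.g. overlaps an existing brick
            continue                             # reject and sample again
        bricks.append(candidate)
        if not is_stable(bricks):                # physics check: no floating bricks
            bricks.pop()                         # roll back the offending brick
    return bricks
```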

<https://arxiv.org/pdf/2505.05469>

The team created the dataset — StableText2Lego — used to train LegoGPT: each source shape [a ShapeNetCore mesh] is first voxelized into a 20 x 20 x 20 grid, from which the initial LEGO brick layout is determined.

<https://huggingface.co/datasets/ShapeNet/ShapeNetCore>

This layout is then varied while keeping the overall shape, unstable designs are filtered out, and the remaining structures are rendered from 24 different viewpoints so that GPT-4o can generate descriptions for each one.
[ImageSource: Carnegie Mellon University]

The most interesting example is when they feed the designs to robot arms and let them build the resulting structure. From text to LEGO with no human intervention! Sounds like something from a bad movie.

If you want to play with the AI yourself, the team released its dataset, code and models, making it easier for anyone to fork the team’s work.

<https://github.com/AvaLovelace1/LegoGPT/>
<https://huggingface.co/spaces/cmu-gil/LegoGPT-Demo>

🫥Lego’s actual comment: "We're unable to comment at this time."🫥
