[Show GN: AI Image Style Coach (celebrity look-alike)
'Show GN' is an AI image style coaching service that analyzes a single photo and recommends styles that match the user's vibe and image. Rather than celebrity look-alike comparison, it provides mood-based style coaching.
How would you align sets of multiple (~20) large (2-4 GB) #microscopy images?
For smaller subset images, ImageJ plugins that compute transformations from SIFT landmark correspondences work well. However, standard ImageJ (Bio-Formats) file handling doesn't cope well with such large files. For the plugins that do handle large files (the BigData family) or chunked storage (e.g. Zarr), I in turn don't know how to implement SIFT (or similar); for BigWarp, for example, I can only find manual landmark annotation, i.e. no option to create landmarks via other plugins.
My images are iterative fluorescence whole-slide scans of the same slide, with a constant nuclear stain and varying other stains. There is some x/y shift and rotation as well as warping; nothing major, but I need nearly pixel-perfect alignment (e.g. QuPath + Warpy worked well on larger images but was too imprecise).
Stitching happens on the fly during imaging and I'm not sure I can extract the tiles faithfully, so the ASHLAR pipeline didn't seem applicable. I've seen VALIS recommended, but implementation seemed daunting, and since the nuclear stain provides reasonable fiducial points the full workflow seemed like overkill.
Ideally I would want a scripted solution as this has to scale up to hundreds of such sets eventually and downstream processing is in python+R anyhow.
Eerie… but then again, context is everything. Google has access to a huge amount of information in the images themselves and in EXIF metadata, if available. Correlating all of this across its huge user base opens possibilities we cannot even imagine.
These companies and their tools already know more about us than we know about ourselves. We are the product.
Ever realized why we need rules and regulations around privacy?
I have two open positions in my lab at the Advanced Light Microscopy Unit Centre for Genomic Regulation (CRG):
- Imaging Scientist (permanent position. Deadline 11th Nov.) https://recruitment.crg.eu/content/jobs/position/imaging-scientists-advanced-light-microscopy-unit-almu
- Entry-Level Imaging Scientist (12 months fixed-term position. Deadline 18th Nov.) https://recruitment.crg.eu/content/jobs/position/entry-level-imaging-scientist-advanced-light-microscopy-unit-almu
If you have any questions don’t hesitate to reach out.
Boosts appreciated.
#getfedihired #fedihire #jobSearch #jobposting #Microscopy #Optics #ImageAnalysis
Position open: Head of the Center for Microscopy and Image Analysis (ZMB), a core facility at the University of Zurich, Switzerland.
#microscopy #PhDJobs #academia #ElectronMicroscopy #LightMicroscopy #ImageAnalysis
Last week to register for our FREE webinar series on "Mastering Colocalization Analysis": from raw image to scientific results in minutes. Reserve your seat now!
http://svi.nl/webinarinvitation
#imaging #microscopy #cellbiology #fluorescence #imageanalysis #colocalization
Today our team member Anna Breger tells her story - “Many little twists and turns have brought me to where I am now and I am absolutely thrilled about my interdisciplinary research project working on image analysis and historical music manuscripts.”
➡️ Find her full story at https://hermathsstory.eu/anna-breger/
#AppliedMathematics #ImageAnalysis #Music #InterdisciplinaryResearch #NonTraditionalPathways #DataScience #HerMathsStory
New tutorial from the Galaxy Training Network! Learn how to quantify gel electrophoresis bands using QuPath + Galaxy. Fully open, reproducible, and beginner-friendly.
https://training.galaxyproject.org/training-material/news/2025/08/18/bands_image_analysis.html
@galaxyproject
#Bioinformatics #ImageAnalysis #GelElectrophoresis #OpenScience #ReproducibleResearch #GalaxyProject #QuPath #Bioimaging #LifeSciences #OpenSourceTools #ScienceEducation #DataAnalysis #ResearchTools
version 0.7-0 of my R package `bayesImageS` is now available on CRAN for Linux and macOS
(Windows binaries are still being built and should be available soon)
The main change is a reduction in the console output for the exchange algorithm. There were also some minor changes to fix a WARN and a NOTE due to compatibility issues with the latest RcppArmadillo, which now uses the Armadillo 15 linear algebra library by default.
Looking for an image-analysis AI that runs on a Mac Mini M4? Tried multimodal models like Gemma 3 but ran into errors? Seeking recommendations for models optimized for Apple Silicon and the Neural Engine. #AI #ImageAnalysis #MacM4 #TríTuệNhânTạo #PhânTíchẢnh
https://www.reddit.com/r/LocalLLaMA/comments/1no369c/whats_the_best_image_analysis_ai_i_can_run/
Transformer-Ensemble-Based Implicit Spectral–Spatial Functions for Arbitrary-Resolution Hyperspectral Pansharpening.
IEEE Transactions on Geoscience and Remote Sensing, vol. 63, pp. 1-19, 2025, Art no. 5519519
https://doi.org/10.1109/TGRS.2025.3589021
#ai #transformers #imageanalysis
bsky https://bsky.app/profile/clirspec.org
Sept 25: Learn to track objects over time and instances. With Carsten Rother (@uniheidelberg).
Register for the series 👉 https://bit.ly/6-image-processing-tasks
Google URL context tool now supports PDF analysis and scaled production use: Google's URL context tool for Gemini API reaches production scale with PDF support, image analysis, and expanded content types for developers. https://ppc.land/google-url-context-tool-now-supports-pdf-analysis-and-scaled-production-use/ #Google #URLContextTool #GeminiAPI #PDFAnalysis #ImageAnalysis
Last week to apply to the Light-Sheet Image Analysis Workshop.
A five-day practical course on the processing and analysis of light-sheet microscopy imaging data. It will take place in Santiago, Chile, from January 5–9, 2026.
Deadline: August 8.
Learn more and apply here: https://lightsheetchile.cl/light-sheet-image-analysis-workshop-2026-2/
#Microscopy #Lightsheet #ImageProcessing #ImageAnalysis #LatinAmerica #GlobalSouth
AI: Explainable Enough
They look really juicy, she said. I was sitting in a small room with a faint chemical smell, doing one of my first customer interviews. There is a sweet spot between going too deep and asserting a position. Good AI has to be just explainable enough to satisfy the user without overwhelming them with information. Luckily, I wasn't new to the problem.
Nuthatcher atop Persimmons (ca. 1910) by Ohara Koson. Original from The Clark Art Institute. Digitally enhanced by rawpixel.
Coming from a microscopy and bio background with a strong inclination towards image analysis, I had picked up deep learning as a way to be lazy in the lab. Why bother figuring out features of interest when you can have a computer do it for you, was my angle. The issue was that in 2015 no biologist would accept any kind of deep-learning analysis, and definitely not if you couldn't explain the details.
What the domain expert user doesn’t want:
– How a convolutional neural network works. Confidence scores, loss, and AUC are all meaningless to a biologist, and to a doctor too.
What the domain expert desires:
– Help at the lowest level of detail that they care about.
– An AI that identifies features A, B, and C, and conveys that when you see A, B, & C together it is likely to be disease X.
Most users don't care how deep learning really works. So if you start giving them details like the IoU score of the object-detection bounding box, or whether you used YOLO or R-CNN, their eyes will glaze over and you will never get a customer. Draw a bounding box, heat map, or outline with the predicted label, and stop there. It's also bad to go to the other extreme: if the AI just states the diagnosis for the whole image, the AI might be right, but the user does not get to participate in the process. Not to mention regulatory risk goes way up.
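To make "label, not scores" concrete, here is a minimal sketch using Pillow; the box coordinates and label are invented for illustration. The point is what's absent: no confidence score, no model name, just the localized finding.

```python
from PIL import Image, ImageDraw

# Placeholder canvas standing in for the analyzed image
img = Image.new("RGB", (200, 150), "white")
draw = ImageDraw.Draw(img)

# Hypothetical model output: one detected region and its predicted label
box = (40, 30, 160, 110)        # (left, top, right, bottom)
label = "feature A"             # invented label for illustration

# Show the user where and what -- and stop there (no IoU, no confidence)
draw.rectangle(box, outline="red", width=2)
draw.text((box[0], box[1] - 12), label, fill="red")
```

The same principle applies to heat maps and outlines: render the finding at the level of detail the expert reasons at, and keep the model internals out of the interface.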
This applies beyond images; consider LLMs. No one with any expertise likes a black box. Today, why do LLMs generate code instead of directly doing the thing that the programmer is asking them to do? It's because the programmer wants to ensure that the code "works", and they have the expertise to figure out if and when it goes wrong. It's the same reason that vibe coding is great for prototyping but not for production, and why frequent readers can spot AI patterns, ahem, easily. So in a Betty Crocker cake mix kind of way, let the user add the egg.
Building explainable-enough AI takes immense effort. It is actually easier to train AI to diagnose the whole image, or to give every detail. Generating high-quality data at that just-right level is very difficult and expensive. Do it right, however, and the effort pays off. The outcome is an AI-human causal prediction machine, where the causes, i.e. the mid-level features, inform the user and build confidence toward the final outcome. The deep learning part is still a black box, but the user doesn't mind, because you aid their thinking.
I'm excited by some new developments like REX, which retrofit causality onto standard deep learning models. With improvements in performance, user preferences for detail may change, but I suspect the need for AI to be explainable enough will remain. Perhaps we will even have custom labels like 'juicy'.
#AI #AIAdoption #AICommunication #AIExplainability #AIForDoctors #AIInHealthcare #AIInTheWild #AIProductDesign #AIUX #artificialIntelligence #BettyCrockerThinking #BiomedicalAI #Business #CausalAI #DataProductDesign #DeepLearning #ExplainableAI #HumanAIInteraction #ImageAnalysis #LLMs #MachineLearning #StartupLessons #statistics #TechMetaphors #techPhilosophy #TrustInAI #UserCenteredAI #XAI
To wrap this up: Both tools are easy to test. I highly recommend trying them on your own data to see what works best for your use case.
I’ll include #CellSeg3D in our next #Napari #bioimage analysis course (https://www.fabriziomusacchio.com/teaching/teaching_bioimage_analysis/). Curious what impressions and feedback the students will share. 🧪🔍
What I really like about @napari is how well it integrates modern #Python tools. Great to have such a flexible, evolving #opensource platform for (bio) #imageanalysis! 👌
👏 Big congrats to Annika Reinke for winning the Hector Foundation Prize 2025 for Metrics Reloaded, setting new standards for AI in image analysis.
Learn more, explore the tool & meet all awardees in a video 👉 https://helmholtz-imaging.de/news/hector-foundation-prize-for-annika-reinke/
#helmholtz #helmholtzimaging #imaging #metrics #metricsreloaded #AI #imageanalysis
Day 3 at #HIconference2025 wrapped with exciting talks on #AI for #imageanalysis, data integration & moonshot projects.
A big thank you to all speakers, chairs & participants!
See you next year!
Following on from their successful meeting in 2023, @J_Cell_Sci are delighted to announce a second iteration of the Imaging Cell Dynamics Meeting to be held in 2026.
Find out more:
https://www.biologists.com/meetings/jcsimaging/
#Cells #CellDynamics #CellScience #Imaging #JCS #Microscopy #SuperResolutionImaging #ElectronMicroscopy #Tomography #ExpansionMicroscopy #OrganelleDynamics #MembraneTrafficking #CytoskeletalDynamics #TissueDynamics #ImageAnalysis #LightMicroscopy #Microscope