Dario Seyb

Research Scientist at Meta Reality Labs

PhD from Dartmouth College. Doing graphicsy things with computers. Alternative geometry representations are my jam.

Also, cycling (sometimes far).

2024-11-18

We have opened applications for a few additional research scientist intern positions next summer: metacareers.com/jobs/440476391

I'm specifically looking for someone with a strong graphics background who is also familiar with diffusion models!

2024-10-04

We're also hiring PhD interns for next summer/fall: metacareers.com/jobs/440476391

2024-10-04

Want to come up with (neural) graphics techniques for the next generation of mixed reality and AR? My team at Reality Labs Research is hiring research scientists!

metacareers.com/jobs/521058304

I only started six months ago, but so far the environment has been incredibly supportive and I've learned a lot from many people who really know what they are doing. There are hard problems to solve and the potential to change how things are done on a large scale!

Dario Seyb boosted:

OK here is my big announcement. For the past couple years I've been working on a rapid prototyping and development platform for real time rendering.
TL;DR - use nodes in a node graph to string together compute shaders, ray gen shaders, draw calls, etc. View it in a viewer that supports hot reloading. When you are done, generate code that would pass a code review. (Currently only DX12 code gen is public. More coming in future!)
github.com/electronicarts/gigi

Image captions:
A node graph editor showing a technique made in Gigi, made up of a few resource nodes, a draw call, and several subgraph nodes, which are each their own graph internally.
The Gigi viewer showing a technique being run. The steps of the technique can be seen visually in the panel on the right, a single image is viewed more directly in a center panel, technique parameters are shown on the upper left, and profiling information is shown in the lower left.
Code generated from the Gigi compiler, showing that it produces code that is well commented, uses human-readable variable names, and would pass a reasonable code review.
2024-07-25

Somewhat related: With the current COVID wave, I expect SIGGRAPH to be an absolute shit show, infection-wise.
That's why I decided to get it a week early (for the first time!) and be freshly recovered just in time to practice my talk...

2024-07-25

I'll be at SIGGRAPH next week, let me know if you want to meet up and chat!

2024-06-19

@BartWronski @wjarosz Thank you! It means a lot to me that this kind of work is appreciated, even though it's lacking "practicality".

2024-06-19

@bitinn That's no coincidence! I got to work with Alex Evans in the early stages of this project.
I'm excited to see if future work manages to make this more practical, but the techniques used in Dreams are probably just as good (and much faster) if you care mostly about artistic expression!

2024-06-19

It's super exciting to see that the community appreciates this type of work: we will receive a Best Paper award at SIGGRAPH this year! blog.siggraph.org/2024/06/sigg

2024-06-19

Still, I'm very excited about the theoretical insights and connections we made, and I think it's incredibly useful to be able to reason about stochastic geometry in a principled way. After all, whenever you are dealing with the real world, there is some uncertainty involved!

2024-06-19

Our method is still far from "practical" - rendering an SIS is orders of magnitude slower than traditional VPT - and we found that for smooth processes, there are no closed-form solutions for many of the quantities we need (e.g. transmittance).

2024-06-19

Coming back to more practical examples, we show how to use non-stationary random fields for a variety of use cases, such as artistic design, visualizing results of stochastic surface reconstruction (cc @sellan_s!), and level of detail!

2024-06-19

And represent a wide range of appearances in the same “geometry primitive”.

2024-06-19

We can use non-stationary processes to smoothly interpolate between different flavors of stochastic geometry.

2024-06-19

(This reveals a pet peeve of mine: The Smith approximation to shadowing and masking does *not* just assume uncorrelated microfacet heights and slopes as is often stated, it also computes a modified survival probability in a random field!)

2024-06-19

This makes it clear that shadowing and masking are just ensemble average transmittance terms!
These connections let us reason about which processes we can use to represent geometry - masking is only non-zero in smooth processes (a "Brownian surface" wouldn't reflect any light!)

2024-06-19

It turns out that microfacet surfaces are already described via SISes! Additionally, we find strong connections between stochastic process and volume rendering concepts - for example, first-passage times map neatly to free-flight distance distributions.
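
To make the first-passage connection concrete, here is a hypothetical Monte Carlo toy (my own illustration, not code from the paper): the first step at which a random walk crosses a level plays the role of a free-flight distance.

```python
import numpy as np

rng = np.random.default_rng(2)

# 2,000 discrete random walks, each up to 4,000 steps.
n, steps = 2_000, 4_000
walks = np.cumsum(rng.normal(0.0, 0.05, size=(n, steps)), axis=1)

# First-passage step: the first time each walk exceeds the level 1.0.
# Walks that never cross within `steps` are censored at `steps`.
crossed = walks >= 1.0
fp = np.where(crossed.any(axis=1), crossed.argmax(axis=1), steps)

# The histogram of `fp` is the analogue of a free-flight distance
# distribution in volume rendering: how far a "ray" travels before
# its first collision with the stochastic geometry.
```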

2024-06-19

Naively sampling values on a grid covering the whole scene is intractable - think of it like sampling a normal for each microfacet in your scene or the position of each water droplet in a cloud!

We show that you can instead sample values along ray segments "on demand." This makes rendering SISes tractable (though not "fast") and lets us explore their appearance space.
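
A heavily simplified sketch of the on-demand idea (my own, using a plain Gaussian process with a squared-exponential kernel, not the paper's actual sampling scheme): jointly draw correlated field values only at the points a ray visits, instead of on a grid over the whole scene.

```python
import numpy as np

rng = np.random.default_rng(1)

# Squared-exponential covariance between two sets of 1D ray parameters.
def kernel(a, b, length=0.2):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

def sample_field_along_ray(ts):
    """Jointly sample correlated Gaussian field values at ray
    parameters `ts`, rather than over the entire scene."""
    K = kernel(ts, ts) + 1e-8 * np.eye(len(ts))  # jitter for stability
    L = np.linalg.cholesky(K)
    return L @ rng.standard_normal(len(ts))

ts = np.linspace(0.0, 1.0, 64)
values = sample_field_along_ray(ts)
# A zero crossing of the field marks where this realization's level
# set intersects the ray.
crossings = np.nonzero(np.diff(np.sign(values)))[0]
```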

2024-06-19

A stochastic implicit surface is the distribution of level sets sampled from a random field. We compute ensemble average light transport over that distribution.
Notably, that's very different from computing light transport over the average level set!
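
A toy 1D sketch of that distinction (my own illustration, not code from the paper): model the surface position as a random variable, then compare the ensemble-average transmittance against the transmittance of the average surface.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1D "stochastic implicit surface": f(x) = x - s, whose zero level
# set sits at a random depth s = 0.5 + Gaussian jitter per realization.
s = 0.5 + rng.normal(0.0, 0.1, size=100_000)

# Ensemble-average transmittance at depth x along a ray from x = 0:
# the fraction of realizations whose surface lies beyond x.
def ensemble_T(x):
    return np.mean(s > x)

# Transmittance of the *average* level set (surface fixed at x = 0.5).
def mean_surface_T(x):
    return 1.0 if x < 0.5 else 0.0

# The ensemble average falls off smoothly; the average surface is a
# hard step. Very different appearance!
print(ensemble_T(0.4), mean_surface_T(0.4))
print(ensemble_T(0.6), mean_surface_T(0.6))
```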

2024-06-19

I'm excited to share our #SIGGRAPH2024 paper on unifying stochastic representations for light transport: cs.dartmouth.edu/~wjarosz/publ

We show how to use stochastic implicit surfaces (SISes) to describe microfacet surfaces and participating media (and a bunch of new things in between).
