Image of a texture atlas for a 3D model of a cat girl. It's a mess.
Roast my texture atlas.
I did eventually grasp how to manipulate UVs to make stuff look crisp and not too distorted.
The shoes look garbage, but I'm happy with the rest. Next I'm gonna animate her.
Let's try again with a GIF
This was supposed to be animated (spinning). I converted the rendered video to webp and it was working in Bluesky's preview. But now it's not 🤔
Spinning head of a low-poly anime-style cat-girl with dark purple hair.
I'm learning 3d modelling :3
Turns out all the graphics programming I've learned in the past is pretty useless if you can't make your own models.
Screenshot of a GitHub issue. gemini-cli (bot) is replying to itself (content not important). The conversation chain is interrupted with "5200 remaining items" and a "Load more" button.
> 5200 remaining items
😫
Stanford Bunny rendered with a texture of a test card.
I got textures to work.
At the moment I want to try some other new wgpu features - unrelated to this software renderer. But I'll try to work on this regularly, as I want to eventually make a proper release.
Not sure if this will turn out to be a viable rendering backend. But it's a good learning experience and I think the software renderer or the JIT compiler alone can be great for testing and debugging (e.g. you can set breakpoints in your shaders).
Performance: The teapot rendering took 20 ms (release build on an i7-8565U), but the fragment shader for it is very simple. Filling pixels takes the most time, so I need to try more complex shading to get a more representative result. I have a PBR shader lying around that I want to try at some point.
Textures will be next. They already exist (we're rendering to one), but the shader runtime needs to expose a few builtin functions to allow the shader program to sample from them.
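Roughly, such a sampling builtin boils down to something like this on the CPU side (a minimal nearest-neighbor sketch with made-up names, not the actual runtime's API):

```rust
/// Minimal CPU-side texture a software renderer's shader runtime might
/// expose to JIT'd shader code. Struct and names are illustrative only.
struct Texture {
    width: u32,
    height: u32,
    /// RGBA8 pixels, row-major.
    data: Vec<[u8; 4]>,
}

impl Texture {
    /// Nearest-neighbor sampling with normalized UV coordinates,
    /// clamped to the edge (like a ClampToEdge address mode).
    fn sample_nearest(&self, u: f32, v: f32) -> [u8; 4] {
        let x = ((u * self.width as f32) as i64).clamp(0, self.width as i64 - 1) as u32;
        let y = ((v * self.height as f32) as i64).clamp(0, self.height as i64 - 1) as u32;
        self.data[(y * self.width + x) as usize]
    }
}

fn main() {
    // 2x2 test texture: red, green / blue, white.
    let tex = Texture {
        width: 2,
        height: 2,
        data: vec![
            [255, 0, 0, 255], [0, 255, 0, 255],
            [0, 0, 255, 255], [255, 255, 255, 255],
        ],
    };
    assert_eq!(tex.sample_nearest(0.1, 0.1), [255, 0, 0, 255]);
    assert_eq!(tex.sample_nearest(0.9, 0.9), [255, 255, 255, 255]);
    println!("ok");
}
```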
A list of what works right now: depth buffers, vertex buffers, index buffers, clipping, culling, point/line/tri topology as either list or strip, interstage variables with no or linear interpolation, uniform buffers, surfaces.
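For the interpolation part: "linear" interpolation of an interstage variable is just a barycentric blend of the three vertex values ("no interpolation" means taking a single provoking vertex's value instead). A sketch of that, not the backend's actual code:

```rust
/// Barycentric coordinates of point p in the 2D triangle (a, b, c).
fn barycentric(p: [f32; 2], a: [f32; 2], b: [f32; 2], c: [f32; 2]) -> [f32; 3] {
    // Twice the signed area of the triangle (p0, p1, p2).
    let area = |p0: [f32; 2], p1: [f32; 2], p2: [f32; 2]| {
        (p1[0] - p0[0]) * (p2[1] - p0[1]) - (p1[1] - p0[1]) * (p2[0] - p0[0])
    };
    let total = area(a, b, c);
    [area(p, b, c) / total, area(p, c, a) / total, area(p, a, b) / total]
}

/// Linearly interpolate a per-vertex value across the triangle.
fn interpolate_linear(bary: [f32; 3], values: [f32; 3]) -> f32 {
    bary[0] * values[0] + bary[1] * values[1] + bary[2] * values[2]
}

fn main() {
    let (a, b, c) = ([0.0, 0.0], [1.0, 0.0], [0.0, 1.0]);
    // At the centroid each vertex contributes exactly 1/3.
    let bary = barycentric([1.0 / 3.0, 1.0 / 3.0], a, b, c);
    let v = interpolate_linear(bary, [0.0, 3.0, 6.0]);
    assert!((v - 3.0).abs() < 1e-5);
    println!("ok");
}
```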
Then I picked up cranelift and wrote a JIT compiler to compile naga IR to x86_64 (or aarch64 and a few others). It was a lot of work but relatively straightforward. There's still a lot of language features missing and I want to refactor SIMD support, but overall I'm happy with it.
Of course there's the little detail of actually running WGSL shader programs. Fortunately there's naga, which is a shader language IR and transpiler.
My first attempt was to interpret the naga IR, but it turned out to be not very elegant and probably way too slow.
As mentioned my aim was to make it compatible with wgpu, so you can just use it like any other wgpu backend.
I'd almost call it straightforward to implement: You just write a triangle/line rasterizer and bam. Well almost.
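The core really is small. A minimal edge-function rasterizer sketch (my own illustration of the technique, not the backend's code - real rasterizers also need fill rules, clipping, and so on):

```rust
/// Signed area of the parallelogram (a, b, p): > 0 if p is left of edge a->b.
fn edge(a: (f32, f32), b: (f32, f32), p: (f32, f32)) -> f32 {
    (b.0 - a.0) * (p.1 - a.1) - (b.1 - a.1) * (p.0 - a.0)
}

/// Rasterize one counter-clockwise triangle into a width x height
/// coverage mask by testing every pixel center against all three edges.
fn rasterize(tri: [(f32, f32); 3], width: usize, height: usize) -> Vec<bool> {
    let [a, b, c] = tri;
    let mut mask = vec![false; width * height];
    for y in 0..height {
        for x in 0..width {
            // Sample at the pixel center.
            let p = (x as f32 + 0.5, y as f32 + 0.5);
            let inside = edge(a, b, p) >= 0.0
                && edge(b, c, p) >= 0.0
                && edge(c, a, p) >= 0.0;
            mask[y * width + x] = inside;
        }
    }
    mask
}

fn main() {
    // A triangle covering the lower-left half of an 8x8 target.
    let mask = rasterize([(0.0, 0.0), (8.0, 0.0), (0.0, 8.0)], 8, 8);
    let covered = mask.iter().filter(|&&c| c).count();
    // Pixel centers with x + y <= 7 are inside: 8 + 7 + ... + 1 = 36.
    assert_eq!(covered, 36);
    println!("covered {covered} of 64 pixels");
}
```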
A 3D rendered image of the Utah teapot. The teapot is shaded in a red/green/blue gradient. The background is black.
Haven't posted here in a while, but I've been busy programming. One thing led to another and I'm currently working on a software renderer backend for wgpu.
This is a render of the classic Utah teapot. At the moment I'm working on support for textures.
So SSTV stands for Slow Scan Television, a mode used by amateur radio operators to transmit still images via radio. It encodes the pixels and a header using frequency modulation, so the pulses in these plots are just detections of tones at different frequencies.
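Since everything is tones, detecting a header pulse boils down to measuring the power at a known frequency over a short window. A Goertzel filter does that cheaply for a single frequency; here's a sketch (1900 Hz and 1200 Hz are the standard SSTV leader and sync tones - this is not my decoder's actual code):

```rust
/// Power of a single frequency bin in `samples` (Goertzel algorithm).
fn goertzel_power(samples: &[f32], sample_rate: f32, freq: f32) -> f32 {
    let w = 2.0 * std::f32::consts::PI * freq / sample_rate;
    let coeff = 2.0 * w.cos();
    let (mut s1, mut s2) = (0.0f32, 0.0f32);
    for &x in samples {
        let s0 = x + coeff * s1 - s2;
        s2 = s1;
        s1 = s0;
    }
    s1 * s1 + s2 * s2 - coeff * s1 * s2
}

fn main() {
    let sample_rate = 8000.0;
    // 20 ms of a pure 1900 Hz tone (the SSTV leader frequency).
    let samples: Vec<f32> = (0..160)
        .map(|n| (2.0 * std::f32::consts::PI * 1900.0 * n as f32 / sample_rate).sin())
        .collect();
    let at_leader = goertzel_power(&samples, sample_rate, 1900.0);
    let at_sync = goertzel_power(&samples, sample_rate, 1200.0);
    // The 1900 Hz bin should dominate the 1200 Hz bin by a wide margin.
    assert!(at_leader > 10.0 * at_sync);
    println!("leader {at_leader:.1}, sync {at_sync:.1}");
}
```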
4 plots showing pulses: 1: captioned "Leader", in red, with two long pulses at the start 2: captioned "Sync", in blue, with a short pulse between the leader pulses, and two pulses later 3: captioned "Vis Low", with some pulses later 4: captioned "Vis High", with some pulses later, complementary to "Vis Low".
I'm still working on my software-defined radio stuff in Rust. Yesterday I began working on an SSTV decoder. It took me soooo long to just get clear detection of the pulses used in the header. And mind you, this is from a very clean signal I generated myself.
AM radio is really simple. All you gotta do is multiply the radio signal with e^(-i*2*pi*f*t) to shift the frequency of the station to 0. Then lowpass filter (remove high unwanted frequencies) and decimate (drop samples to downsample to audio sample rate). That's it! Multiply and average.
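As a sketch of exactly that pipeline (with a synthetic carrier standing in for real IQ samples from the dongle, and a boxcar average as the crudest possible lowpass):

```rust
use std::f32::consts::PI;

/// Demodulate AM: mix the complex IQ signal down by `offset_hz`, boxcar
/// lowpass, then decimate. "Multiply and average."
fn demod_am(iq: &[(f32, f32)], sample_rate: f32, offset_hz: f32, decim: usize) -> Vec<f32> {
    let mut audio = Vec::new();
    let mut acc = (0.0f32, 0.0f32);
    for (n, &(i, q)) in iq.iter().enumerate() {
        // Multiply with e^(-i*2*pi*f*t) to shift the station to 0 Hz.
        let phase = -2.0 * PI * offset_hz * n as f32 / sample_rate;
        let (c, s) = (phase.cos(), phase.sin());
        // Complex multiply (i + jq) * (c + js).
        acc.0 += i * c - q * s;
        acc.1 += i * s + q * c;
        // Averaging `decim` samples lowpasses and downsamples in one go.
        if (n + 1) % decim == 0 {
            let (re, im) = (acc.0 / decim as f32, acc.1 / decim as f32);
            // AM: the audio is the envelope (magnitude) of the baseband signal.
            audio.push((re * re + im * im).sqrt());
            acc = (0.0, 0.0);
        }
    }
    audio
}

fn main() {
    // Synthetic carrier at 10 kHz in a 96 kHz IQ stream, constant envelope 1.
    let sample_rate = 96000.0;
    let iq: Vec<(f32, f32)> = (0..9600)
        .map(|n| {
            let phase = 2.0 * PI * 10000.0 * n as f32 / sample_rate;
            (phase.cos(), phase.sin())
        })
        .collect();
    let audio = demod_am(&iq, sample_rate, 10000.0, 2);
    // A constant-envelope carrier should demodulate to a flat ~1.0 signal.
    assert!(audio.iter().all(|a| (a - 1.0).abs() < 1e-3));
    println!("ok: {} audio samples", audio.len());
}
```

A real receiver would use a proper lowpass filter and a much larger decimation factor to get down to an audio sample rate, but the structure is the same.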
I remembered that I have a book about signal processing! With it, wikipedia and youtube all this stuff starts to make a lot of sense now!
I just got a crude AM radio receiver working 📻 🎶
It does make sense now. The FFT outputs the complex amplitudes for the equally spaced frequencies f_k = k / (N*T), from 0 up to the sample rate. But a phasor with angular frequency > pi/T (beyond Nyquist) rotates so far between samples that it's indistinguishable from one rotating the opposite way - a negative frequency. So the right half of the output is really the frequencies left of the center frequency.
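So before rendering, the two halves have to be swapped so the negative frequencies end up left of center (the classic "fftshift"). Sketch:

```rust
/// Reorder FFT output from [0, fs) to [-fs/2, fs/2): swap the halves so
/// bins >= n/2 (the negative frequencies) come first. Even-length input.
fn fft_shift<T: Clone>(spectrum: &[T]) -> Vec<T> {
    let half = spectrum.len() / 2;
    let mut out = Vec::with_capacity(spectrum.len());
    out.extend_from_slice(&spectrum[half..]);
    out.extend_from_slice(&spectrum[..half]);
    out
}

fn main() {
    // Bin indices of an 8-point FFT: bins 0..=3 cover 0 to +fs/2,
    // bins 4..=7 alias to the negative frequencies -fs/2 to 0.
    let bins: Vec<i32> = (0..8).collect();
    let shifted = fft_shift(&bins);
    assert_eq!(shifted, vec![4, 5, 6, 7, 0, 1, 2, 3]);
    println!("{shifted:?}");
}
```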
Oh my goddess! I just realized the big mistake that made it look all wrong. When you do FFT you'll get first the right and then the left half of the spectrum.
The rustfft doc clearly states that it would be ordered with ascending frequency.
But my output matches SDR++ now
Fortunately my rust replacement for rtl_tcp allows multiple client connections, so I can connect my TUI app and SDR++ to it at the same time and compare them.
Unfortunately they don't match that well. Some of my math isn't mathing correctly ☹️
Cropped screenshot of the same program. Again some vertical blue and pink stripes are visible. Prominently in the center there's text: "x-[7.1 MHz ± 5 kHz]"
I've added a little indicator that displays the frequency of your mouse position.
If I'm not mistaken (I'd have to look with a proper app like SDR++), this red stripe at 7.1 MHz is that Russian over-the-horizon radar that is always on at night at that frequency.
Hmm, stupid me. If I only need low-resolution spectra, I can just downsample the input signal, no?
I also want to do the FFT on larger buffers to increase resolution in frequency space. For one I could then display smaller sections of the spectrum. But I also want to be able to select small frequency bands and demodulate them.
I think it's still better to do the accumulation by FFTing a larger buffer and then downsampling the spectrum to terminal width. The accumulated result should look about the same, and I get the higher frequency resolution on top. (Strictly speaking it's not cheaper though: an FFT is O(n log n), i.e. O(log n) per sample, so the bigger n actually costs slightly more - the win is resolution, not runtime.)
So this basically takes terminal-width number of samples, FFTs them, accumulates this into a line over 500ms and then renders it to the top of the screen, pushing the old lines down.
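The accumulation step itself is trivial: average the per-bin power over all FFT frames that fall into the 500 ms window (a sketch of just that step, with the FFT frames given as plain power vectors):

```rust
/// Accumulate per-bin power over several FFT frames into one waterfall line.
fn accumulate_line(frames: &[Vec<f32>]) -> Vec<f32> {
    let width = frames[0].len();
    let mut line = vec![0.0f32; width];
    for frame in frames {
        for (acc, &power) in line.iter_mut().zip(frame) {
            *acc += power;
        }
    }
    // Normalize by the frame count so brightness doesn't depend on how
    // many FFTs happen to fit into the 500 ms window.
    for acc in &mut line {
        *acc /= frames.len() as f32;
    }
    line
}

fn main() {
    let frames = vec![vec![1.0, 0.0, 3.0], vec![3.0, 0.0, 1.0]];
    assert_eq!(accumulate_line(&frames), vec![2.0, 0.0, 2.0]);
    println!("ok");
}
```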
Screenshot of a terminal waterfall display. It shows vertical lines in blue, pink and red. The center is dominated by a blue band. On the right is a bright red line. The rest is mostly pink, mixed with some blue lines.
I made a terminal program that shows radio signals from the RTL-SDR as a waterfall. So it basically shows the spectrum over time. More red -> stronger signal.
This is around 7 MHz
HAM radio websites are really a window into the 00s
I wrote my own high-level bindings. They wrap the low-level C API and provide an async API with support for multiple consumers per IQ stream.
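The multiple-consumers part can be sketched with plain std channels: one reader thread owns the device handle and fans each IQ buffer out to every subscriber. (Illustrative only - the real bindings are async and call into librtlsdr, which this sketch replaces with a dummy sample source.)

```rust
use std::sync::mpsc;
use std::thread;

/// Fan one stream of IQ buffers out to several consumers. The single reader
/// thread stands in for the one that owns the librtlsdr device handle.
fn spawn_reader(consumers: Vec<mpsc::Sender<Vec<u8>>>) -> thread::JoinHandle<()> {
    thread::spawn(move || {
        for n in 0..3 {
            // Dummy "IQ buffer"; a real reader would call the C API here.
            let buf = vec![n as u8; 4];
            for tx in &consumers {
                // Ignore consumers that have hung up.
                let _ = tx.send(buf.clone());
            }
        }
    })
}

fn main() {
    let (tx1, rx1) = mpsc::channel();
    let (tx2, rx2) = mpsc::channel();
    spawn_reader(vec![tx1, tx2]).join().unwrap();
    // Both consumers see the full stream independently.
    assert_eq!(rx1.iter().count(), 3);
    assert_eq!(rx2.iter().count(), 3);
    println!("ok");
}
```

Cloning every buffer for every consumer is the simple-but-wasteful choice; `Arc<[u8]>` buffers would avoid the copies.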
I believe it's not 100% safe the way I share the device handle between threads, but that's literally how the authors of librtlsdr use their library.