it's time to d-d-d-d-doomscroll
if you subscribe to self-determination theory, video games fulfil the intrinsic psychological needs of competence, autonomy and relatedness, and books mostly don't.
my current thinking is: tag audio volumes with "interior", then sound event fires -> get overlapping actors -> for each loop -> actor has tag "interior" -> set bool -> set bool parameter in metasound.
is this sensible?
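a rough c++ sketch of that flow, for anyone who prefers it to blueprint spaghetti (the "Interior" tag matches the post above, but the helper name, the AudioComp argument and the "bInterior" parameter name are just placeholders, not anything from my actual project):

```cpp
#include "Components/AudioComponent.h"
#include "GameFramework/Actor.h"

// Hypothetical helper: call from the player character when the sound event fires.
static void UpdateInteriorFlag(AActor* Player, UAudioComponent* AudioComp)
{
    if (!Player || !AudioComp)
    {
        return;
    }

    // Gather every actor currently overlapping the player...
    TArray<AActor*> Overlapping;
    Player->GetOverlappingActors(Overlapping);

    // ...and check whether any of them is an audio volume tagged "Interior".
    bool bInterior = false;
    for (AActor* Actor : Overlapping)
    {
        if (Actor && Actor->ActorHasTag(TEXT("Interior")))
        {
            bInterior = true;
            break;
        }
    }

    // Forward the result into the MetaSound as a bool input parameter,
    // so it can switch between the interior and exterior reverb tails.
    AudioComp->SetBoolParameter(TEXT("bInterior"), bInterior);
}
```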
technical #gameaudio unreal engine folks: I'm trying out pre-baked reverb tails for "interior" and "exterior" locations. how would you go about getting this data from the player character into a metasound?
LLMs work by predicting the next word based on everything before it. but always choosing the most likely word makes the answers too formulaic, so they added randomness by sometimes choosing the 2nd or 3rd most likely word instead. this can have a knock-on effect that changes the whole answer.
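a toy version of that sampling idea (the vocabulary and logits are completely made up, and real models pick from tens of thousands of tokens with temperature, top-k etc. rather than just the top three, so treat this as a sketch of the principle only):

```cpp
#include <cmath>
#include <cstdio>
#include <random>
#include <string>
#include <vector>

// Turn raw scores (logits) into probabilities and draw one token.
// Greedy decoding would just take the highest logit every time;
// sampling like this sometimes picks the 2nd or 3rd most likely token.
int SampleToken(const std::vector<double>& Logits, double Temperature, std::mt19937& Rng)
{
    std::vector<double> Probs(Logits.size());
    double Sum = 0.0;
    for (size_t i = 0; i < Logits.size(); ++i)
    {
        Probs[i] = std::exp(Logits[i] / Temperature); // softmax numerator
        Sum += Probs[i];
    }
    for (double& P : Probs)
    {
        P /= Sum; // normalise to probabilities
    }

    std::discrete_distribution<int> Dist(Probs.begin(), Probs.end());
    return Dist(Rng);
}

int main()
{
    std::vector<std::string> Vocab = {"cat", "dog", "fish"};
    std::vector<double> Logits = {2.0, 1.5, 0.2}; // "cat" is most likely

    std::mt19937 Rng{std::random_device{}()};
    for (int i = 0; i < 5; ++i)
    {
        // runner-up words get picked now and then, and in a real model that
        // changes the context for every word after it: the knock-on effect.
        std::printf("%s\n", Vocab[SampleToken(Logits, 1.0, Rng)].c_str());
    }
    return 0;
}
```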
september was my "scene old school emo post-hardcore" phase
did not appreciate being called out like this
my spotify 2024 wrapped
entirely unsurprising
does the physical manual have all the coffee stains and doodles like the in-game one?
hello!
we made it!!