Recently, among people in their 20s and 30s, gatherings focused on a shared purpose rather than private socializing have been on the rise. These are meetups premised on a kind of "loose relationship." One example is the "admin night," where people get together to work through tasks they have each been putting off, like leftover work or organizing photo albums.
ㅠㅠㅠㅠ It's so sad that there are people who want to use it but can't. Please, please sell the library for the 15th anniversary...
Happy Hatsune Miku Day!
#ミクの日
Listen to the ZOLA PROJECT boys. They're also made from old data, but there's still the base model doing the heavy lifting to make them sound as good as they can.
More like a million epochs trained on what amounts to only about 3 minutes of data. They definitely overtrained the model on old VOCALOID renders, so she won't sound like anything but a low-quality recreation of her old self.
youtu.be/ooieUOq0xXw?...
A Miku V6 EA test I made back at Christmas. Bluesky messed up the audio :(
Anyway, her plosive consonants are often too short, /t/ and /k/ sometimes get mixed up, and her flaps are too long. Not to mention how it's littered with lazy one-semitone-low dips all over the place.
I'm not complaining about having to tweak everything myself
I tune precisely because that's the fun part
But for it to be fun, the tweaking has to pay off, and there have to be differences and new expressions
Since there's none of that, it just turns into grinding away at what's basically a heavier V4x that will probably cost a bit more at official release, and that's what makes me angry
What
From what I heard, they use NT for voice lines (the VA's reference audio got leaked once), and that's their excuse for not giving the virtual singers proper voice lines
Really strange, because Saki Fujita fully voiced Miku for Dropkick on My Devil!! and CFM never had a problem with it being called Miku
They're literally letting everyone else voice Miku and converting it for their official uses, but not the one person who actually voices Miku. Shame.
Did you mean Ms. Fujita's talking voice, or a PJSK OC?
First we had "pirated versions make it accessible and you're elitist for calling pirates out," and now we get this?? Theft isn't "making things accessible"
NT is the worst. In short, it's V1 but in 2026, made by people who don't know what they're doing.
They don't even use samples anymore in NT. They're recreating vocals by just matching frequencies from the audio sample. Of course it's not even 80% accurate, because of optimization issues
I'm overall very frustrated with how CFM treats the value of acting and voicing a character. They don't even seem to believe that HUMANS CAN VOICE CHARACTERS, and they're stuck in the delusion that human voicing would ruin the Piapro characters, reducing them to mere replicas of their voice providers.
PJSK talkloids are first dubbed by other VAs (the human OCs' VAs), then traced with NT. That last case is even funnier, because at that point it's not even her original voice provider. And somehow, getting Saki Fujita back and letting HER sing as Miku is "not Miku, just a Saki Fujita replica"
This too. CFM is so... flippant about using human voicing to produce Miku. First we had the Appends: various styles, each sounding distinct from the original V2. They even made an in-house voice converter for talkloids at Magical Mirai. They got Saki Fujita to voice Miku in an anime —
Because most of the time, when the provided data doesn't contain enough voicing information, V6 AI's base model fills the gap, like those RVC videos on YouTube. The only way to stop that is to bombard the model with old renders until even the base model can't interfere.
YOU KNOW WHAT? Reusing spliced samples likely costs even more money and time than just paying Ms. Fujita to sing. They have to render the phrases, draw pitch bends all over, and, I don't know exactly how much, but they have to feed in far more data than a normal AI VB needs to make her sound this soulless and artificial
What's lazily produced artificial pitch bends, pronunciations from spliced samples stitched together, no aspirations whatsoever, engine noise amplified by training on renders, and overall... just spliced samples put together by audio engineers?
In fact, that would be the "Mikuest" Miku we could get in 2026. They say they want to preserve Miku, but how is that preserving if you don't have the way she hits the right pitch, pronounces each syllable, controls aspiration, etc. for the "Miku" voice, and instead fill that place with —
Una V6, surprisingly, was also made from the VP's a cappella recordings. They aimed for a similar tone, and the post-processing messed things up a bit.
And I agree! The raw data might not *completely* sound like Miku, but that's what recording engineers are for: to work with Ms. Fujita to bring out Miku
"We want to preserve Miku's original tone"
"We're optimizing the benefits of the V6 AI engine"
While doing anything but letting Saki Fujita voice Miku again. Make it make sense
'preserve the miku-ness'
And then they proceed to drown her voice in engine noise: first the 2007 audio quality, second rendering the data through the VOCALOID engine (yes, they're using renders, not raw samples), and finally overtraining the model on insufficient data.
Saki Fujita's singing makes me cry. It's so... alive. We can hear Miku from her, with so many nuances the old engine couldn't capture.
TRUTH BOMB. Like, WHY aren't they just recording a cappella singing data from Ms. Fujita? The V2 samples got washed out as they reused them for V4x, then the V4x samples for SP, then as the training dataset for V6, and even the V2 samples are missing a big part of her "Miku" voicing
#VOCALOID News
A demo song for Miku V6's English has been posted, along with the official reveal of the V6 art.
Miku V6 demo review:
Yamaha needs to run a whole lot of interference on Crypton
They need to semi-threaten them that AI can't be made like this
They finally managed to make a Miku database that's better than V4x after all these years. Of course, the only reason it's better is that the English is intelligible without completely torturous tuning, so it's not that much of a win, but hey, it's a win! I'll take it.
@meganesuki.bsky.social Thank you, now other people can see Gane-nim's god-tier art too, sob sob
Looking at the AHS con capture of sensei, he's such a sexy rock star that I simply must do Kiyokou.
キヨテル