Last Updated on June 4, 2022
Isn’t it fantastic when you’re conversing with someone online and you can watch their mouth move as they speak? It significantly enhances the experience, particularly in Virtual Reality.
That is exactly what visemes are all about: shaping your avatar’s mouth so you can watch yourself speak in a mirror. It’s the kind of feature that elevates something good to exceptional. However, some users run into issues with visemes, and this guide covers how to fix them.
Visemes Not Working – How To Fix It
If your visemes stop working, the problem is most likely in the viseme-handling layer of your animation controller. The layer’s weight may be set to 0, though that is unlikely unless you changed it yourself.
Alternatively, if you have more than one copy of the avatar in the project, the visemes are most likely assigned to the body mesh of one of the copies rather than the one you actually uploaded.
As far as we know, if the viseme blendshapes work in Unity and you set them manually in the VRC Avatar Descriptor, there is no reason they shouldn’t function.
You may manually assign viseme blendshapes.
Check whether any of your avatar’s animations or gestures set visemes or shapekeys to 0. While those animations play, this can force all of your avatar’s talking visemes to 0. If that is the case, simply delete the 0-value shapekeys from your animations.
An animation attached to your avatar’s resting pose can also cause this, so make sure nothing is assigned there.
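To make the fix above concrete, here is a small Python sketch of what “deleting the 0 shapekeys” amounts to. This is purely illustrative — it is not part of Unity or the VRChat SDK, and the dict format and the `blendShape.` property prefix are only stand-ins for what you would see in Unity’s Animation window:

```python
# Conceptual sketch: an "animation" here is just a dict of
# {animated_property: value}; real Unity clips are edited in the Animation window.
def strip_zero_shapekeys(animation_curves):
    """Drop blendshape curves that pin a shapekey to 0, since they would
    override and silence the talking visemes while the clip plays."""
    return {prop: value for prop, value in animation_curves.items()
            if not (prop.startswith("blendShape.") and value == 0)}

# A hypothetical idle animation that accidentally pins a viseme to 0
idle = {"blendShape.vrc.v_aa": 0, "blendShape.smile": 100, "rotation.y": 15}
print(strip_zero_shapekeys(idle))  # {'blendShape.smile': 100, 'rotation.y': 15}
```

In Unity itself, the equivalent step is opening the clip in the Animation window and removing the offending blendshape keyframes by hand.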
What Exactly Are Visemes?
Oculus Lipsync translates human speech into a set of mouth shapes known as “visemes,” which are visual representations of phonemes.
Each viseme represents the mouth shape for a particular group of phonemes.
VRChat detects phonemes through your microphone and adjusts your character’s mouth to the matching shapes, creating the appearance that the avatar is speaking.
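The phoneme-to-viseme idea can be sketched in a few lines of Python. The 15 viseme names below are the standard Oculus/VRChat set, but the phoneme groupings are a simplified approximation for illustration — the real Oculus Lipsync analyzes raw audio, not text, and this sketch is not its actual code:

```python
# Simplified, illustrative phoneme -> viseme grouping.
# Real lip sync works on the audio signal, not on phoneme strings.
PHONEME_TO_VISEME = {
    "p": "PP", "b": "PP", "m": "PP",      # lips pressed together
    "f": "FF", "v": "FF",                  # teeth on lower lip
    "th": "TH",
    "t": "DD", "d": "DD",
    "k": "kk", "g": "kk",
    "ch": "CH", "j": "CH", "sh": "CH",
    "s": "SS", "z": "SS",
    "n": "nn", "l": "nn",
    "r": "RR",
    "aa": "aa", "e": "E", "ih": "ih", "oh": "oh", "ou": "ou",  # vowels
}

def visemes_for(phonemes):
    """Map a phoneme sequence to viseme names; unmapped sounds fall back to 'sil'."""
    return [PHONEME_TO_VISEME.get(p, "sil") for p in phonemes]

print(visemes_for(["p", "e", "l", "ou"]))  # ['PP', 'E', 'nn', 'ou']
```

Each of those viseme names corresponds to one blendshape on your avatar’s face mesh, which is why the blendshapes need to exist and be assigned correctly.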
How To Export and Import Visemes on VRChat
After you’ve prepared all of the shapes, you can export the entire bundle. Go to export and select all of the shapes, meshes, and bones.
Check the Animation option and make sure Blend Shapes is enabled as well; if it isn’t, the shapes won’t export successfully.
Now type in the desired name and export it.
You should already have Unity 2018.4.20f1 (or whatever version VRChat uses) installed.
After the character has been imported, add a new component named VRC Avatar Descriptor. A few settings will then be shown for you to configure.
Visemes View Position
This option lets you specify the location of the first-person point of view; in other words, the position from which you will see the world inside VRChat. Naturally, the small indicator should be placed at eye level, as near to the eyes as possible.
Visemes Lip Sync
How do we get our characters to speak? For this option, set Mode to Viseme Blend Shape.
A Face Mesh option will now appear. Using the little circle on the right, you can pick the mesh where the viseme blendshapes are stored. In this scenario there is only one option, because everything is on the same mesh.
Now we’re getting down to business (pun intended). Using the correct names makes our lives easier: with the right naming, each and every blend shape falls into place. But just to be sure, take a look at them.
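If your blendshapes follow the standard `vrc.v_` naming convention, the SDK can find them for you. As a quick sanity check on the naming, here is a small Python sketch — it is not part of any VRChat tooling, just a way to spot a missing or misnamed viseme in a list of blendshape names:

```python
# The 15 standard viseme blendshape names VRChat looks for (vrc.v_ prefix).
VISEMES = ["sil", "PP", "FF", "TH", "DD", "kk", "CH",
           "SS", "nn", "RR", "aa", "E", "ih", "oh", "ou"]
STANDARD_NAMES = [f"vrc.v_{v}" for v in VISEMES]

def missing_visemes(mesh_blendshapes):
    """Return the standard viseme names absent from a mesh's blendshape list
    (case-insensitive, since naming case varies between exports)."""
    have = {name.lower() for name in mesh_blendshapes}
    return [n for n in STANDARD_NAMES if n.lower() not in have]

# Example: a hypothetical mesh that forgot the 'CH' shape
print(missing_visemes([f"vrc.v_{v}" for v in VISEMES if v != "CH"]))  # ['vrc.v_CH']
```

If any name comes back missing, rename the blendshape in Blender (or reassign it manually in the descriptor) before uploading.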
Visemes Eye Look
If you have keen eyes, you may have noticed that Blink was missing (these puns just keep coming). That’s because we’ll set it up using the Eye Look tab. When you click Enable, a menu of options will be displayed.
Ignore the rest and go to the Eyelids section, where you can choose blendshapes. Once again, select the mesh where the blendshapes are stored.