I've been looking at the cmorph_pc format for the past few days to try to get morph targets into Blender as shape keys. The format is pretty easy to parse, with the basic structure looking like this:
Code:
--
-- CCMORPH_PC FILE FORMAT
--
0x20-byte header
----------------
uint32: 0x1337BEEF
uint32: 0x00000005
uint32: 0x00000000
uint32: 0x00000000
uint32: 0x0BADBEEF
uint32: 0x00000003
uint32: 0x00000001
uint32: number of morph targets
0x10-byte items
---------------
for(uint32 i = 0; i < number of morph targets; i++) {
    uint32: 0x00000000
    uint32: 0x00000000
    uint32: unknown (possibly a morph target ID, possibly fixed-point scale/bias values)
    uint32: 0x00000001
}
0x28-byte items
---------------
for(uint32 i = 0; i < number of morph targets; i++) {
    uint32: 0x00000000
    uint32: 0x00000000
    uint32: 0x00000000
    uint16: number of morphed vertices
    uint16: largest vertex index used
    real32: min_x (morph target bounding box)
    real32: min_y (morph target bounding box)
    real32: min_z (morph target bounding box)
    real32: max_x (morph target bounding box)
    real32: max_y (morph target bounding box)
    real32: max_z (morph target bounding box)
}
morph target data
-----------------
for(uint32 i = 0; i < number of morph targets; i++)
{
    seek next 0x10-byte aligned position
    for(uint32 j = 0; j < number of morphed vertices; j++)
    {
        uint16: position_x
        uint16: position_y
        uint16: position_z
        uint16: vertex_index
        uint08: normal_x
        uint08: normal_y
        uint08: normal_z
        uint08: normal_w (always 0)
    }
}
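For reference, this is roughly how I'm reading the headers and per-target records (a C sketch; the struct and field names are all mine, and I'm assuming everything is little-endian):
Code:
#include <stdint.h>
#include <stdio.h>

/* My names for the 0x28-byte per-target records; layout follows the spec
   above. Fields are read one at a time to avoid struct-padding surprises. */
typedef struct {
    uint16_t num_verts;        /* number of morphed vertices */
    uint16_t max_vert_index;   /* largest vertex index used */
    float    bbox_min[3];      /* morph target bounding box */
    float    bbox_max[3];
} MorphTargetInfo;

static uint32_t read_u32(FILE *f) { uint32_t v = 0; fread(&v, 4, 1, f); return v; }
static uint16_t read_u16(FILE *f) { uint16_t v = 0; fread(&v, 2, 1, f); return v; }
static float    read_f32(FILE *f) { float    v = 0; fread(&v, 4, 1, f); return v; }

/* "seek next 0x10-byte aligned position" from the spec. */
static void align16(FILE *f)
{
    long pos = ftell(f);
    if (pos & 0xF)
        fseek(f, (pos | 0xF) + 1, SEEK_SET);
}

/* Fills info[] (caller-sized) and returns the number of morph targets. */
static uint32_t read_morph_headers(FILE *f, MorphTargetInfo *info, uint32_t max_targets)
{
    fseek(f, 0x1C, SEEK_SET);                     /* last uint32 of the 0x20-byte header */
    uint32_t num_targets = read_u32(f);

    fseek(f, 0x10 * (long)num_targets, SEEK_CUR); /* skip the 0x10-byte items for now */

    for (uint32_t i = 0; i < num_targets && i < max_targets; i++) {
        fseek(f, 12, SEEK_CUR);                   /* three zero uint32s */
        info[i].num_verts      = read_u16(f);
        info[i].max_vert_index = read_u16(f);
        for (int k = 0; k < 3; k++) info[i].bbox_min[k] = read_f32(f);
        for (int k = 0; k < 3; k++) info[i].bbox_max[k] = read_f32(f);
    }
    return num_targets;                           /* morph vertex data follows, align16() per target */
}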
All is well until I get to decoding the morph positions and morph normals. Each morph vertex is 12 (0x0C) bytes. The position values are some kind of 16-bit fixed-point values whose encoding I cannot figure out (most fixed-point encodings read the 16-bit value, subtract a bias, and divide the result by a scale, but nothing I tried worked). Also, the morph target bounding boxes are typically much larger than I expect them to be.
Was wondering if I could get some hints on how the morph vertices are encoded, please? No matter what fixed-point encoding I use, once I add the morph data to the base head data I get something like this:

[image]
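The guess I'm testing now (nothing in the file confirms it, and the function names below are mine) is that each position uint16 maps linearly from [0, 65535] onto the morph target's bounding box on that axis, and each normal uint8 maps onto [-1, 1]:
Code:
#include <stdint.h>

/* Guess: positions are normalized over the per-target bounding box,
   normals are normalized bytes. Not verified against the game. */
static float dequant_pos(uint16_t raw, float bb_min, float bb_max)
{
    return bb_min + ((float)raw / 65535.0f) * (bb_max - bb_min);
}

static float dequant_normal(uint8_t raw)
{
    return ((float)raw / 255.0f) * 2.0f - 1.0f;
}
If that's the right idea, the oversized bounding boxes might be a padded quantization range rather than tight bounds on the morphed vertices, and whether the decoded positions are absolute or deltas to add onto the base head would change what correct output looks like.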



Thanks,
Steven
