
Algorithm for making "accessory" meshes fit on a morphable body mesh?


Years ago, before Mixamo Fuse became part of Adobe, I online-rigged and downloaded basically every morph target and accessory for a base body individually, so I would have all the resources needed to more or less recreate the editor inside Unreal Engine 4. The naked body mesh with all its morph targets was very simple: I just procedurally compared each morphed mesh to the base mesh vertex by vertex (and also compared the bone transforms) to get the deltas.

The problem is making the accessories (pants, shirts, hair, glasses etc.) look good on a morphed body mesh. I had some basic ideas, but they mostly looked rather bad. First I tried giving each accessory vertex the same total delta as its originally closest body vertex. Then I tried something similar with whole triangles: I maintained each clothing vertex's original relative position on its closest body triangle, but that didn't really work either. Pants, shirts and the like usually look almost acceptable, because the surfaces of the torso and limbs stay fairly flat and regular. Worst are things like glasses and shoes, which end up extremely warped with just a bit of body morphing, because there is no kind of "stiffness" for the accessories. Mapping vertices to vertices can't be the solution.
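
To be concrete, my first attempt boiled down to something like this (a minimal sketch with made-up types, not actual engine code):

	#include <limits>
	#include <vector>

	struct Vec3 { float x, y, z; };

	static float DistSq (const Vec3 &a, const Vec3 &b)
	{
		float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
		return dx*dx + dy*dy + dz*dz;
	}

	// For each accessory vertex, find the closest body vertex in the base pose
	// (brute force) and copy that vertex's morph delta. This is what warps
	// rigid accessories like glasses: neighboring accessory vertices can snap
	// to body vertices with very different deltas, and nothing keeps the
	// accessory stiff.
	static void TransferDeltasClosestVertex (
		const std::vector<Vec3> &bodyBase,   // base body vertex positions
		const std::vector<Vec3> &bodyDeltas, // per-vertex morph deltas of the body
		const std::vector<Vec3> &accBase,    // accessory vertex positions (base pose)
		std::vector<Vec3> &accDeltas)        // output: accessory morph deltas
	{
		accDeltas.resize (accBase.size ());
		for (size_t i = 0; i < accBase.size (); i++)
		{
			size_t best = 0;
			float bestD = std::numeric_limits<float>::max ();
			for (size_t j = 0; j < bodyBase.size (); j++)
			{
				float d = DistSq (accBase[i], bodyBase[j]);
				if (d < bestD) { bestD = d; best = j; }
			}
			accDeltas[i] = bodyDeltas[best];
		}
	}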

The ideal solution would be the same algorithm that is running inside the Fuse 1.3 editor, which is still available on Steam.

The accessory morphing is really fast and happens continuously in real time as you morph the body. I could be wrong here, but it doesn't seem as if deltas for accessories are pre-mapped or otherwise pre-computed for a given morph target - after all, importing custom accessory meshes would then require an extreme amount of precomputation. It rather looks like it's all done dynamically. But how?...


RSI posted a video that explains some of the approaches they used in Star Citizen. It's on a fairly high level, but interesting nonetheless. I don't remember which one it was though, and they have so many...

On 5/28/2019 at 4:07 PM, l0calh05t said:

RSI posted a video that explains some of the approaches they used in Star Citizen. It's on a fairly high level, but interesting nonetheless. I don't remember which one it was though, and they have so many...

Do you remember how high-level it really was? There must be literally hundreds of videos here: https://www.youtube.com/user/RobertsSpaceInd/videos and many are an hour or longer. I wouldn't want to bother searching if they don't even talk about the technical implementation at all.

Maybe this is the one you meant?

Watching now... In any case, thanks for replying.

In the past I did it this way:

Make a high-res volume of 3D vectors and inject the deltas from the vertices close enough to the skin mesh, diffuse the volume while keeping the known deltas fixed, and finally transform the cloth vertices with the volume. (Both injection and transform interpolate grid values by linearly interpolating the 8 corners of a cell.)

This worked quite well, but some manual tweaking was necessary here and there.
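
In outline, the whole pipeline looks like this (using the Volume:: functions from the code posted further down in this thread; ToVolumeSpace is a placeholder for whatever scaling/translation maps world space into grid space):

	#include <utility>
	#include <vector>

	// 'vec' is the 3-component float vector type used by the Volume:: code below.
	static void MorphCloth (
		const std::vector<vec> &skinPos, const std::vector<vec> &skinDelta,
		std::vector<vec> &clothPos, const int dim)
	{
		std::vector<vec> volumeData (dim*dim*dim, vec(0));
		std::vector<float> weightAccum (dim*dim*dim, 0.f);

		// 1. Inject the known per-vertex morph deltas of the skin into the grid.
		for (size_t i = 0; i < skinPos.size (); i++)
			Volume::Inject (volumeData, &weightAccum, dim,dim,dim,
				ToVolumeSpace (skinPos[i]), skinDelta[i]);

		// 2. Cells that received enough samples become fixed boundary values.
		std::vector< std::pair<int, vec> > boundary;
		Volume::NormalizeAndExtractBoundary (boundary, volumeData, dim,dim,dim, weightAccum);

		// 3. Diffuse the deltas through the grid, keeping the boundary fixed.
		Volume::SolveForHarmonicMap (volumeData, boundary, dim,dim,dim, dim*4);

		// 4. Displace each cloth vertex by the interpolated delta at its position.
		for (size_t i = 0; i < clothPos.size (); i++)
			clothPos[i] += Volume::Sample (ToVolumeSpace (clothPos[i]), volumeData, dim,dim,dim);
	}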

 

Nowadays I would try to improve this with a harmonic map instead of diffusion. This should give results that are as smooth as possible, and no user tweaking of a blur radius is necessary.

This would also work on the mesh itself. The usual way is to calculate cotangent weights for the cloth triangles and calculate the harmonic map from those. (If you are not familiar with this stuff, this should help: http://rodolphe-vaillant.fr/?e=69 - the blog also has examples of how to calculate harmonic maps and diffusion on meshes.)

The downside of cotangent weights is that they only work perfectly for triangle angles less than 90 degrees. For good meshes that's usually the case. An alternative is mean value coordinates (see the papers by Michael Floater). Recently I have replaced most uses of cotangent weights with those, and I get clearly better results for UV maps and vector fields on meshes.
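
For reference, the standard cotangent edge weight is easy to compute. A minimal sketch (helper types are made up, this is not code from the linked blog):

	#include <algorithm>
	#include <cmath>

	struct Vec3 { float x, y, z; };
	static Vec3  Sub   (Vec3 a, Vec3 b) { return { a.x-b.x, a.y-b.y, a.z-b.z }; }
	static float Dot   (Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
	static Vec3  Cross (Vec3 a, Vec3 b)
	{ return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; }

	// Cotangent of the angle at corner c between edges (c->a) and (c->b):
	// cot = cos/sin = dot / |cross|.
	static float CotAtCorner (Vec3 c, Vec3 a, Vec3 b)
	{
		Vec3 u = Sub (a, c), v = Sub (b, c);
		Vec3 x = Cross (u, v);
		float sinLen = std::sqrt (Dot (x, x));
		return Dot (u, v) / std::max (sinLen, 1e-8f);
	}

	// Weight of interior edge (i,j): average the cotangents of the two angles
	// opposite the edge (at pOppA and pOppB). This goes negative as soon as an
	// opposite angle exceeds 90 degrees - the failure case mentioned above.
	static float CotanEdgeWeight (Vec3 pi, Vec3 pj, Vec3 pOppA, Vec3 pOppB)
	{
		return 0.5f * (CotAtCorner (pOppA, pi, pj) + CotAtCorner (pOppB, pi, pj));
	}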

But when working on the mesh, the smoothing happens 'only' over the surface - the volume approach might work better eventually, and weights between voxels are trivial to calculate. For example, if the cloth mesh has topological handles the volume approach preserves proportions better, and if it has inside and outside modeled with different triangles, making a thin-sheet volume is surely better too (but slow on the CPU).

 

Finally there will still be some collisions, with skin peeking through tight cloth; instead of fixing this manually, resolving the collisions automatically might be less work if you have lots of cloth. Also, transferring the skinning weights from the skin mesh might be necessary to avoid collisions during animation.
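
A naive sketch of such an automatic fix (made-up types; it pushes each cloth vertex out along the normal of its closest skin vertex - real code would use closest surface points and a spatial acceleration structure):

	#include <limits>
	#include <vector>

	struct Vec3 { float x, y, z; };
	static Vec3  Sub (Vec3 a, Vec3 b) { return { a.x-b.x, a.y-b.y, a.z-b.z }; }
	static float Dot (Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
	static float DistSq (Vec3 a, Vec3 b) { Vec3 d = Sub (a, b); return Dot (d, d); }

	// Push each cloth vertex out along the normal of its closest skin vertex
	// if it sits below a small offset above the skin's tangent plane.
	static void PushClothOutsideSkin (
		const std::vector<Vec3> &skinPos,
		const std::vector<Vec3> &skinNormal, // outward unit normals
		std::vector<Vec3> &clothPos,
		const float offset)
	{
		for (size_t i = 0; i < clothPos.size (); i++)
		{
			// closest skin vertex (brute force)
			size_t best = 0;
			float bestD = std::numeric_limits<float>::max ();
			for (size_t j = 0; j < skinPos.size (); j++)
			{
				float d = DistSq (clothPos[i], skinPos[j]);
				if (d < bestD) { bestD = d; best = j; }
			}
			// signed height of the cloth vertex above the skin tangent plane
			float h = Dot (Sub (clothPos[i], skinPos[best]), skinNormal[best]);
			if (h < offset) // skin would peek through (or cloth is too close)
			{
				float push = offset - h;
				clothPos[i].x += skinNormal[best].x * push;
				clothPos[i].y += skinNormal[best].y * push;
				clothPos[i].z += skinNormal[best].z * push;
			}
		}
	}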

 

Edit: I see there is an example of harmonic weights on a grid as well. This should convey the idea quickly: http://rodolphe-vaillant.fr/?e=40

 

Thank you so much. This is gonna take me some serious time and effort, but it looks like there is hope for my project yet.

32 minutes ago, Max Power said:

This is gonna take me some serious time and effort

The (recommended) volume approach is really easy - it only took me a few hours to make it work, and there was no need to look at any resources about the math. That said, this ignores the details about collisions.

However, my proposals are not realtime.

For realtime, maybe it would work to make each skin vertex's tangent space a bone and do regular skinning of the cloth? Yes - I remember I tried this. But with cloth distant from the skin, the involved rotation gave me too much distortion (with tight cloth it worked fine), so I kept using the simpler volume method. Btw, to perform high-quality smoothing of bone weights, cotangent weights would again be the proper tool, as they allow diffusing values evenly in all directions over irregular triangles. I guess the skinning idea would have worked for me if I had known this math back in the day.
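
To make that idea concrete, binding and re-evaluating a cloth vertex against a single skin vertex frame could look like this (a rough sketch with made-up types; a real version would blend several nearby frames like regular skinning does):

	struct Vec3 { float x, y, z; };
	static Vec3  Sub (Vec3 a, Vec3 b) { return { a.x-b.x, a.y-b.y, a.z-b.z }; }
	static float Dot (Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

	// Orthonormal tangent frame at a skin vertex, acting as a "bone".
	struct Frame { Vec3 origin, t, b, n; };

	// Express a world-space point in the local coordinates of a frame.
	static Vec3 ToLocal (const Frame &f, Vec3 p)
	{
		Vec3 d = Sub (p, f.origin);
		return { Dot (d, f.t), Dot (d, f.b), Dot (d, f.n) };
	}

	// Transform local coordinates back to world space with a (morphed) frame.
	static Vec3 ToWorld (const Frame &f, Vec3 l)
	{
		return { f.origin.x + f.t.x*l.x + f.b.x*l.y + f.n.x*l.z,
		         f.origin.y + f.t.y*l.x + f.b.y*l.y + f.n.y*l.z,
		         f.origin.z + f.t.z*l.x + f.b.z*l.y + f.n.z*l.z };
	}

	// Bind once against the base pose, then per morph:
	//   clothPosMorphed = ToWorld (morphedFrame, ToLocal (baseFrame, clothPosBase));
	// The further the cloth vertex is from the frame origin, the more a small
	// rotation of the frame swings it around - the distortion mentioned above.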

20 hours ago, Max Power said:

Do you remember how high-level it really was? There must be literally hundreds of videos here: https://www.youtube.com/user/RobertsSpaceInd/videos and many are an hour or longer. I wouldn't want to bother searching if they don't even talk about the technical implementation at all.

Maybe this is the one you meant?

Watching now... In any case, thanks for replying.

No, I don't think so. I'm not entirely sure, but it may have been these two panels:

 

 

In any case, IIRC they talked about fitting simpler geometries like ellipses to the morphed geometry for "rigid" attachments, and also about solving potential problems with asymmetry.
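
As an illustration of that rigid-attachment idea (my own sketch, not necessarily what RSI actually does): fit a single transform from the skin vertices near the accessory, before vs. after the morph, and apply it to the whole accessory mesh. The simplest version fits just a translation and uniform scale in the least-squares sense; a full rigid fit would add rotation (e.g. via a polar decomposition):

	#include <vector>

	struct Vec3 { float x, y, z; };
	static Vec3  Add (Vec3 a, Vec3 b) { return { a.x+b.x, a.y+b.y, a.z+b.z }; }
	static Vec3  Sub (Vec3 a, Vec3 b) { return { a.x-b.x, a.y-b.y, a.z-b.z }; }
	static Vec3  Mul (Vec3 a, float s) { return { a.x*s, a.y*s, a.z*s }; }
	static float Dot (Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

	// Least-squares fit of x' = s*x + t from point pairs (before[i], after[i]):
	// s = sum<q_i - qMean, p_i - pMean> / sum|p_i - pMean|^2, t = qMean - s*pMean.
	// Assumes both lists are non-empty and the same size.
	static void FitTranslationAndScale (
		const std::vector<Vec3> &before, // nearby skin verts, base pose
		const std::vector<Vec3> &after,  // the same verts, morphed
		Vec3 &outT, float &outS)
	{
		Vec3 pMean {0,0,0}, qMean {0,0,0};
		const float inv = 1.f / float (before.size ());
		for (size_t i = 0; i < before.size (); i++)
		{
			pMean = Add (pMean, before[i]);
			qMean = Add (qMean, after[i]);
		}
		pMean = Mul (pMean, inv); qMean = Mul (qMean, inv);

		float num = 0.f, den = 0.f;
		for (size_t i = 0; i < before.size (); i++)
		{
			Vec3 p = Sub (before[i], pMean), q = Sub (after[i], qMean);
			num += Dot (q, p);
			den += Dot (p, p);
		}
		outS = (den > 0.f) ? num / den : 1.f;
		outT = Sub (qMean, Mul (pMean, outS));
	}

	// Every accessory vertex then moves as v' = outS * v + outT, so glasses
	// scale and shift with the head but keep their shape.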

On 6/1/2019 at 7:31 PM, JoeJ said:

Both injection and transform interpolate grid values by linearly interpolating the 8 corners of a cell.

 

First question (of many...): how exactly should I calculate this?

Vector += VertexDeltaVector * [[ 0..1, X? / DistanceVertexToCorner ]]???

And should I also average the cell vectors against their vertex/injection-count afterwards? Weighted?

 

You see... this is why it's gonna take me considerably more time and effort than you ^^

5 minutes ago, Max Power said:

Vector += VertexDeltaVector * [[ 0..1, X? / DistanceVertexToCorner ]]???

Taking the 1D case as example would be:

vertex1, position 4.3 and delta 0.6

vertex2, position 4.9 and delta 0.8

so vertex 1 would contribute to grid cell 4 with a weight of 0.7 and cell 5 with weight 0.3

and vertex 2 would contribute to grid cell 4 with a weight of 0.1 and cell 5 with weight 0.9

so in each grid cell we sum up the deltas multiplied by their weights, and the weights themselves separately:

cell 4 = (0.6 * 0.7, 0.7) + (0.8 * 0.1, 0.1) = (0.5, 0.8)

cell 5 = (0.6 * 0.3, 0.3) + (0.8 * 0.9, 0.9) = (0.9, 1.2)

after all vertices have been injected, we divide each cell's sum of weighted deltas by its sum of weights to get the final deltas:

cell 4: 0.5 / 0.8 = 0.625

cell 5: 0.9 / 1.2 = 0.75

which is not far from the input deltas at those positions, so the math seems right. (I hope so at least :) )

 

With more dimensions you just multiply the weights of each axis together (so no Euclidean distance to the grid corner - think of it as Manhattan-style, per-axis weighting if you want).

 

Important: the above weighting is just a naive approach to deal with the over/undersampling problem. There are other options, e.g. finding the closest grid corner for each vertex and snapping to it for injection would also make sense, because it avoids divisions by small weights. But since we reject small weights for the boundary anyway, the above seems to work well.

I took my old code and refactored it a bit (likely I have some use for that too):


	// Note: 'vec' is a small 3-component float vector type (operator[] and
	// the usual arithmetic operators); includes added for completeness.
	#include <utility>
	#include <vector>

	namespace Volume
	{
		template <typename T>
		static void Inject (
			std::vector<T> &volumeData, // weighted volume values data
			std::vector<float> *volumeWeightAccum, // if given we try to average multiple samples per cell
			const int dimX, const int dimY, const int dimZ, // volume dimensions
			const vec position, const T value) // sample position (already scaled and transformed to volume space) and value
		{
			// calc integer grid indices and weight factors
			float xf = position[0];
			float yf = position[1];
			float zf = position[2];
			int xI = (int) xf; if (xI<0 || xI>=dimX-1) return;
			int yI = (int) yf; if (yI<0 || yI>=dimY-1) return;
			int zI = (int) zf; if (zI<0 || zI>=dimZ-1) return;
			xf -= (float)xI;
			yf -= (float)yI;
			zf -= (float)zI;
			float xg = 1 - xf;
			float yg = 1 - yf;
			float zg = 1 - zf;

			// indices to 8 cells
			int g = zI*(dimY*dimX) + yI*dimX + xI;
			int g000 = g;
			int g100 = g + 1;
			int g010 = dimX + g;
			int g110 = dimX + g + 1;
			int g001 = dimY*dimX + g;
			int g101 = dimY*dimX + g + 1;
			int g011 = dimY*dimX + dimX + g;
			int g111 = dimY*dimX + dimX + g + 1;

			// inject weighted sample
			volumeData[g000] += value * xg*yg*zg;
			volumeData[g100] += value * xf*yg*zg;
			volumeData[g010] += value * xg*yf*zg;
			volumeData[g110] += value * xf*yf*zg;
			volumeData[g001] += value * xg*yg*zf;
			volumeData[g101] += value * xf*yg*zf;
			volumeData[g011] += value * xg*yf*zf;
			volumeData[g111] += value * xf*yf*zf;

			if (volumeWeightAccum)
			{
				(*volumeWeightAccum)[g000] += xg*yg*zg;
				(*volumeWeightAccum)[g100] += xf*yg*zg;
				(*volumeWeightAccum)[g010] += xg*yf*zg;
				(*volumeWeightAccum)[g110] += xf*yf*zg;
				(*volumeWeightAccum)[g001] += xg*yg*zf;
				(*volumeWeightAccum)[g101] += xf*yg*zf;
				(*volumeWeightAccum)[g011] += xg*yf*zf;
				(*volumeWeightAccum)[g111] += xf*yf*zf;
			}
		}

		template <typename T>
		static void NormalizeAndExtractBoundary (
			std::vector< std::pair<int, T> > &boundary, // list of values to keep fixed <grid index, value> 
			std::vector<T> &volumeData, 
			const int dimX, const int dimY, const int dimZ,
			const std::vector<float> &volumeWeightAccum, 
			const float minBoundaryWeight = 0.3f) // must be > 0
		{
			boundary.clear();

			for (int i=0; i<dimX*dimY*dimZ; i++)
			{
				float accumW = volumeWeightAccum[i];
				if (accumW >= minBoundaryWeight)
				{
					volumeData[i] *= 1/accumW; // normalize
					boundary.push_back(std::make_pair(i, volumeData[i])); // store as boundary cell
				}
			}
			//// add the boundary of the volume as well as zeroes // nope - better handle this in the solver automatically by simply skipping volume boundary
			//for (int x=0; x<dimX; x+=dimX-1)
			//for (int y=0; y<dimY; y+=dimY-1)
			//for (int z=0; z<dimZ; z+=dimZ-1)
			//{
			//	int g = z*(dimY*dimX) + y*dimX + x;
			//	boundary.push_back(std::make_pair(g, T(0)));
			//}
		}

		template <typename T>
		static void SolveForHarmonicMap (
			std::vector<T> &volumeData, 
			const std::vector< std::pair<int, T> > &boundary, 
			const int dimX, const int dimY, const int dimZ,
			const int maxIterations)
		{
			// todo: very slow! Downsample volume, solve, upsample and start the full res solve with the upsampled guess; OpenCL?
			
			std::vector<T> temp;
			temp.resize(volumeData.size(), T(0));
			for (int iter = 0; iter < maxIterations; iter++)
			{
				for (int x=1; x<dimX-1; x++)
				for (int y=1; y<dimY-1; y++)
				for (int z=1; z<dimZ-1; z++)
				{
					T valueSum(0);
					//float weightSum = 0;
					int g = z*(dimY*dimX) + y*dimX + x;
					for (int u=-1; u<=1; u++)
					for (int v=-1; v<=1; v++)
					for (int w=-1; w<=1; w++)
					{
						if (!u&&!v&&!w) continue;
						int ng = g + w*(dimY*dimX) + v*dimX + u;
						float weight = (u ? 1 : 2) * (v ? 1 : 2) * (w ? 1 : 2);
						//if (!iter&&x==1&&y==1&&z==1) SystemTools::Log("weight %f\n", weight);
						//weightSum += weight;
						valueSum += volumeData[ng] * weight;
					}
					//if (!iter&&x==1&&y==1&&z==1) SystemTools::Log("weightSum %f\n\n", weightSum);
					//valueSum *= 1 / weightSum;
					valueSum *= 1.f / 56.f;
					temp[g] = valueSum;
				}

				// fix boundary
				for (size_t i=0; i<boundary.size(); i++)
				{
					temp[boundary[i].first] = boundary[i].second;
				}

				// todo: break if difference between temp and volumeData is small enough, likely make this check every 8 iterations or so
				volumeData = temp;
			}

		}

		template <typename T>
		static T Sample (
			const vec position,
			std::vector<T> &volumeData,
			const int dimX, const int dimY, const int dimZ
			)
		{
			// calc integer grid indices and weight factors
			float xf = position[0];
			float yf = position[1];
			float zf = position[2];
			int xI = (int) xf; if (xI<0 || xI>=dimX-1) return T(0);
			int yI = (int) yf; if (yI<0 || yI>=dimY-1) return T(0);
			int zI = (int) zf; if (zI<0 || zI>=dimZ-1) return T(0);
			xf -= (float)xI;
			yf -= (float)yI;
			zf -= (float)zI;
			float xg = 1 - xf;
			float yg = 1 - yf;
			float zg = 1 - zf;

			// indices to 8 cells
			int g = zI*(dimY*dimX) + yI*dimX + xI;
			int g000 = g;
			int g100 = g + 1;
			int g010 = dimX + g;
			int g110 = dimX + g + 1;
			int g001 = dimY*dimX + g;
			int g101 = dimY*dimX + g + 1;
			int g011 = dimY*dimX + dimX + g;
			int g111 = dimY*dimX + dimX + g + 1;

			T sample (0);
			sample += volumeData[g000] * xg*yg*zg;
			sample += volumeData[g100] * xf*yg*zg;
			sample += volumeData[g010] * xg*yf*zg;
			sample += volumeData[g110] * xf*yf*zg;
			sample += volumeData[g001] * xg*yg*zf;
			sample += volumeData[g101] * xf*yg*zf;
			sample += volumeData[g011] * xg*yf*zf;
			sample += volumeData[g111] * xf*yf*zf;
			return sample;		
		}
		

	};

I have tested it with this:


		static bool visVolume = 1; ImGui::Checkbox("visVolume", &visVolume);
		if (visVolume)
		{
			static int init = 1;


			static HEMesh mesh;

			static std::vector<vec> volumeData;
			static std::vector< std::pair<int, vec> > boundary;
			const int dim = 32;
			static int solverIterationsCount = 0;

			if (init || ImGui::Button("Reload")) 
			{
				//((HEMesh_Serialization&)mesh).LoadMesh ("C:\\dev\\pengII\\temp\\template_mesh.mesh");
				((HEMesh_Serialization&)mesh).LoadMesh ("C:\\dev\\pengII\\mod\\bunny closed.MTC10.000000.hem");
				// transform mesh vertices to fit into the volume (mesh is centered and has a size of about 20)
				for (int i=0; i<mesh.GetVertexCount(); i++)
				{
					vec p = mesh.GetVertexPos(i);
					p *= float (dim) / 10.f;
					p += vec(float (dim/2));
					mesh.SetVertexPos(i, p);
				}	

				volumeData.clear();
				volumeData.resize(dim*dim*dim, vec(0)); // important to init with zeroes
				std::vector<float> volumeWeightAccum;
				volumeWeightAccum.resize(dim*dim*dim, 0);

				// inject vertices, using their normal as example delta
				for (int i=0; i<mesh.GetVertexCount(); i++)
				{
					vec p = mesh.GetVertexPos(i);
					vec delta = mesh.CalcVertexNormal(i) * -float(dim)/10; // normals point inwards
					Volume::Inject (volumeData, &volumeWeightAccum, 
						dim,dim,dim, p, delta);
				}

				Volume::NormalizeAndExtractBoundary (boundary, volumeData, dim,dim,dim, 
					volumeWeightAccum);

				solverIterationsCount = 0;
				
				//Volume::SolveForHarmonicMap (volumeData, boundary, dim,dim,dim, dim*4); // dim*4 seems to be a good value in any case
				//std::vector< std::pair<int, vec> > emptyBoundary;
				//Volume::SolveForHarmonicMap (volumeData, emptyBoundary, dim,dim,dim, 3); // without a boundary this acts as smoothing
			}

			static bool solvePerFrame = 0; ImGui::Checkbox("solvePerFrame", &solvePerFrame);
			if (solvePerFrame) 
			{
				Volume::SolveForHarmonicMap (volumeData, boundary, dim,dim,dim, 1);
				solverIterationsCount++;
			}
			ImGui::Text("solverIterationsCount %i", solverIterationsCount);
				
			
			static bool visTemplate = 0; ImGui::Checkbox("visTemplate", &visTemplate);
			if (visTemplate) ((HEMesh_Vis&)mesh).RenderMesh(0, false);

			static bool visDeformed = 0; ImGui::Checkbox("visDeformed", &visDeformed);
			if (visDeformed) 
			{
				for (int i=0; i<mesh.GetVertexCount(); i++)
				{
					vec p = mesh.GetVertexPos(i);
					vec delta = Volume::Sample(p, volumeData, dim,dim,dim);
					p += delta;
					RenderPoint(p, 1,1,1);
				}
			}

			static bool visSlice = 1; ImGui::Checkbox("visSlice", &visSlice);
			if (visSlice) 
			{
				float z=dim/2;
				for (float x=0; x<=dim; x+=0.5f)
				for (float y=0; y<=dim; y+=0.5f)
				{
					vec p(x,y,z);
					vec delta = Volume::Sample(p, volumeData, dim,dim,dim); 
					RenderArrow(p, delta, 0.1f, 1,1,1);
					//RenderPoint(p, 0.1f, 1,1,1);
				}

			}
			init = 0;

		}

And I get this result (using normals as deltas):

[Screenshot: the test mesh with a slice of the diffused delta field visualized as arrows]

The distance from the model to the volume boundary affects the results: if it's too small, the deltas shrink too quickly where the cloth is far from the skin; if it's too large, the solver needs more runtime.

 

 

Thanks again for your help. While I'm writing my code based on yours, I'm wondering about the whole "boundary" idea. At first I thought you were just marking nodes to be excluded from the "smoothing" process, to limit the interpolation to the areas between fixed values - but then you could have just skipped those nodes/cells altogether.

... actually, nevermind. It's just a different approach I guess.

