I am scanning a 3D object with a Google Tango device. I touch 4 points on the screen to define a rectangle, and then I want to delete all points of the point cloud that fall outside that rectangle. How can I filter the point cloud based on touch positions on a Tango phone?
I have looked at the "Point To Point" example that ships with the Tango SDK, but I don't understand the relationship between the point cloud and a touch position on the screen. In the file TangoPointCloud.cs, the point cloud is converted to world space as follows:
public void OnTangoPointCloudAvailable(TangoPointCloudData pointCloud)
{
    m_mostRecentPointCloud = pointCloud;

    // Calculate the time since the last successful depth data collection.
    if (m_depthTimestamp != 0.0)
    {
        m_depthDeltaTime = (float)((pointCloud.m_timestamp - m_depthTimestamp) * 1000.0);
    }

    // Fill in the data to draw the point cloud.
    m_pointsCount = pointCloud.m_numPoints;
    if (m_pointsCount > 0)
    {
        _SetUpCameraData();

        DMatrix4x4 globalTLocal;
        bool globalTLocalSuccess = m_tangoApplication.GetGlobalTLocal(out globalTLocal);
        if (!globalTLocalSuccess)
        {
            return;
        }

        DMatrix4x4 unityWorldTGlobal = DMatrix4x4.FromMatrix4x4(TangoSupport.UNITY_WORLD_T_START_SERVICE) * globalTLocal.Inverse;

        // Query the device pose at the depth timestamp so the point cloud can be
        // transformed into world coordinates.
        TangoPoseData poseData;
        bool poseSuccess = _GetDevicePose(pointCloud.m_timestamp, out poseData);
        if (!poseSuccess)
        {
            return;
        }

        DMatrix4x4 unityWorldTDevice = unityWorldTGlobal * DMatrix4x4.TR(poseData.translation, poseData.orientation);

        // The transformation matrix that represents the point cloud's pose.
        // The point cloud, which is in the depth camera's frame, is put into
        // Unity world coordinates. The position and rotation are then extracted
        // from this matrix and applied to the point cloud's transform.
        DMatrix4x4 unityWorldTDepthCamera = unityWorldTDevice * m_deviceTDepthCamera;
        transform.position = Vector3.zero;
        transform.rotation = Quaternion.identity;

        // Add an offset to the point cloud depending on the offset from the
        // TangoDeltaPoseController.
        if (m_tangoDeltaPoseController != null)
        {
            m_mostRecentUnityWorldTDepthCamera = m_tangoDeltaPoseController.UnityWorldOffset * unityWorldTDepthCamera.ToMatrix4x4();
        }
        else
        {
            m_mostRecentUnityWorldTDepthCamera = unityWorldTDepthCamera.ToMatrix4x4();
        }

        // Convert the points array to world space.
        m_overallZ = 0;
        for (int i = 0; i < m_pointsCount; ++i)
        {
            Vector3 point = pointCloud[i];
            m_points[i] = m_mostRecentUnityWorldTDepthCamera.MultiplyPoint3x4(point);
            m_overallZ += point.z;
        }

        m_overallZ = m_overallZ / m_pointsCount;
        m_depthTimestamp = pointCloud.m_timestamp; // Time of capture of this point cloud.

        // For debugging.
        if (m_updatePointsMesh)
        {
            // Need to update indices too!
            int[] indices = new int[m_pointsCount];
            for (int i = 0; i < m_pointsCount; ++i)
            {
                indices[i] = i;
            }

            m_mesh.Clear();
            m_mesh.vertices = m_points;
            m_mesh.SetIndices(indices, MeshTopology.Points, 0);
        }

        // The color should be pose relative; we need to store enough info to go back to pose values.
        m_renderer.material.SetMatrix("depthCameraTUnityWorld", m_mostRecentUnityWorldTDepthCamera.inverse);

        // Try to find the floor using this set of depth points if requested.
        if (m_findFloorWithDepth)
        {
            _FindFloorWithDepth();
        }
    }
    else
    {
        m_overallZ = 0;
    }
}
I tried converting the touch positions from screen space to world space using Camera.main.ScreenToWorldPoint. I also tried the opposite: converting all the point cloud points from world space to screen space. On the screen, the rendered point cloud and the touch positions appear to coincide, but the converted values are completely different.
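To make it concrete, here is a minimal sketch of the filter I am trying to write (PointCloudRectangleFilter and FilterPointsInsideRectangle are my own hypothetical names; m_pointCloud is the TangoPointCloud instance from the example scene, and I am assuming Camera.main is the AR camera that renders the point cloud):

using System.Collections.Generic;
using UnityEngine;

public class PointCloudRectangleFilter : MonoBehaviour
{
    // The TangoPointCloud from the example scene, assigned in the Inspector.
    public TangoPointCloud m_pointCloud;

    // Keep only the points whose screen-space projection falls inside the
    // axis-aligned rectangle spanned by the four touch positions (in pixels).
    public List<Vector3> FilterPointsInsideRectangle(IList<Vector2> touchPoints)
    {
        // Build the screen-space bounding rectangle of the four touches.
        float xMin = float.MaxValue, xMax = float.MinValue;
        float yMin = float.MaxValue, yMax = float.MinValue;
        foreach (Vector2 touch in touchPoints)
        {
            xMin = Mathf.Min(xMin, touch.x);
            xMax = Mathf.Max(xMax, touch.x);
            yMin = Mathf.Min(yMin, touch.y);
            yMax = Mathf.Max(yMax, touch.y);
        }

        var kept = new List<Vector3>();
        for (int i = 0; i < m_pointCloud.m_pointsCount; ++i)
        {
            // m_points[i] is already in Unity world space (see the code above).
            Vector3 worldPoint = m_pointCloud.m_points[i];

            // Project back into screen pixels; z is the distance from the camera.
            Vector3 screenPoint = Camera.main.WorldToScreenPoint(worldPoint);

            // Discard points behind the camera or outside the rectangle.
            if (screenPoint.z > 0f &&
                screenPoint.x >= xMin && screenPoint.x <= xMax &&
                screenPoint.y >= yMin && screenPoint.y <= yMax)
            {
                kept.Add(worldPoint);
            }
        }

        return kept;
    }
}

If I understand the Unity documentation correctly, my original ScreenToWorldPoint attempt probably failed because the z component of the input vector must be the distance from the camera to the point in world units; leaving it at 0 collapses the result to the camera's own position, which would explain why the converted values looked completely different.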
Does anyone know what the problem is? Please help. Thanks.
Thanks for your reply. I will check it and let you know the result. – minpu