
Tango Concepts

2019-11-09 14:26:42
Source: reprint (contributed by a site user)

Overview

Tango Concepts

Tango is a platform that uses computer vision to give devices the ability to understand their position relative to the world around them. It's similar to how you use your eyes to find your way to a room, and then to know where you are in the room and where the floor, the walls, and objects around you are. These physical relationships are an essential part of how we move through our daily lives. Tango gives mobile devices this kind of understanding by using three core technologies: Motion Tracking, Area Learning, and Depth Perception.

Motion Tracking overview

Motion Tracking means that a Tango device can track its own movement and orientation through 3D space. Walk around with a device and move it forward, backward, up, or down, or tilt it in any direction, and it can tell you where it is and which way it's facing. It's similar to how a mouse works, but instead of moving around on a flat surface, the world is your mousepad.

Learn more about Motion Tracking.

Area Learning overview

Human beings learn to recognize where they are in an environment by noticing the features around them: a doorway, a staircase, the way to the nearest restroom. Tango gives your mobile device the same ability. With Motion Tracking alone, the device "sees" the visual features of the area it is moving through but doesn't "remember" them.

With Area Learning turned on, the device not only remembers what it sees, it can also save and recall that information. When you enter a previously saved area, the device uses a process called localization to recognize where you are in the area. This feature opens up a wide range of creative applications. The device also uses Area Learning to improve the accuracy of Motion Tracking.

Learn more about Area Learning.

Depth Perception overview

With depth perception, your device can understand the shape of your surroundings. This lets you create "augmented reality," where virtual objects not only appear to be a part of your actual environment, they can also interact with that environment. One example: you create a virtual character who jumps onto and then skitters across a real-world table top.

Learn more about Depth Perception.

What Are Tango Poses?

As your device moves through 3D space, it calculates where it is (position) and how it's rotated (orientation) up to 100 times per second. A single instance of this combined calculation is called the device's pose. The pose is an essential concept when working with motion tracking, area learning, or depth perception.

To calculate the poses, you must choose base and target frames of reference, which may use different coordinate systems. You can view a pose as the translation and rotation required to transform vertices from the target frame to the base frame.

Here is a simplified version of a Tango pose struct in C:

struct PoseData {
    double orientation[4];
    double translation[3];
};

The two key components of a pose are:


A quaternion that defines the rotation of the target frame with respect to the base frame.

A 3D vector that defines the translation of the target frame with respect to the base frame.

An actual pose struct contains other fields, such as a timestamp and a copy of the frame pair, as you'll see below.
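To make the quaternion-plus-translation idea concrete, here is a small self-contained sketch that applies a pose to a single vertex, mapping it from the target frame into the base frame. This is illustrative only: `PoseData` mirrors the simplified struct above, and `pose_transform_point` is a hypothetical helper, not a Tango API call. It assumes the quaternion is stored as (x, y, z, w) and is unit-length.

```c
/* Mirrors the simplified pose struct above; quaternion stored as (x, y, z, w). */
struct PoseData {
    double orientation[4];
    double translation[3];
};

/* Hypothetical helper: transforms a point from the target frame to the
 * base frame by rotating it with the quaternion, then translating.
 * Uses v' = v + w*t + cross(q_xyz, t), where t = 2 * cross(q_xyz, v). */
static void pose_transform_point(const struct PoseData *p,
                                 const double in[3], double out[3]) {
    const double x = p->orientation[0], y = p->orientation[1],
                 z = p->orientation[2], w = p->orientation[3];
    const double t0 = 2.0 * (y * in[2] - z * in[1]);
    const double t1 = 2.0 * (z * in[0] - x * in[2]);
    const double t2 = 2.0 * (x * in[1] - y * in[0]);
    out[0] = in[0] + w * t0 + (y * t2 - z * t1) + p->translation[0];
    out[1] = in[1] + w * t1 + (z * t0 - x * t2) + p->translation[1];
    out[2] = in[2] + w * t2 + (x * t1 - y * t0) + p->translation[2];
}
```

Rotating (1, 0, 0) by a 90-degree yaw quaternion (0, 0, 0.7071, 0.7071) and then translating by (0, 0, 1) yields (0, 1, 1), i.e. the vertex expressed in the base frame.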

Note: The examples on this page use the C API, but function calls and data structures are similar for Java. In Unity, there are prefabs which handle a lot of these details for you.

Pose data

You can request pose data in two ways:

Request Method #1

Poll for poses using TangoService_getPoseAtTime(). This returns the pose closest to a given timestamp from the base to the target frame. Here is the code for this function in the C API:

TangoErrorType TangoService_getPoseAtTime(
    double timestamp,
    TangoCoordinateFramePair frame_pair,
    TangoPoseData* pose);

The TangoCoordinateFramePair struct specifies the base frame and the target frame.

Note: If you are making an augmented reality app, we recommend that you use TangoService_getPoseAtTime() or TangoSupport_getPoseAtTime() because, in addition to polling for poses, they allow you to align the pose timestamps with the video frames.

The following code gets a pose of the device frame with respect to the start-of-service frame:


TangoPoseData pose_start_service_T_device;
TangoCoordinateFramePair frame_pair;
frame_pair.base = TANGO_COORDINATE_FRAME_START_OF_SERVICE;
frame_pair.target = TANGO_COORDINATE_FRAME_DEVICE;
TangoService_getPoseAtTime(
    timestamp,
    frame_pair,
    &pose_start_service_T_device);

In this example, including the names of the base and target frames in the pose variable name makes the name more descriptive:

TangoPoseData pose_start_service_T_device;
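The `base_T_target` naming convention also makes chained transforms easy to audit: composing `start_service_T_device` from `start_service_T_area` and `area_T_device` is legal precisely when the inner frame names match. A sketch of that bookkeeping with row-major 4x4 matrices (an illustrative helper, not part of the Tango API):

```c
/* Composes two rigid transforms stored as row-major 4x4 matrices:
 * a_T_c = a_T_b * b_T_c. The matching inner "b" is what makes the
 * composition meaningful. */
static void mat4_mul(const double a_T_b[16], const double b_T_c[16],
                     double a_T_c[16]) {
    for (int r = 0; r < 4; ++r) {
        for (int c = 0; c < 4; ++c) {
            double sum = 0.0;
            for (int k = 0; k < 4; ++k)
                sum += a_T_b[r * 4 + k] * b_T_c[k * 4 + c];
            a_T_c[r * 4 + c] = sum;
        }
    }
}
```

Composing a translation of (1, 0, 0) with a translation of (0, 2, 0) yields a transform whose translation column is (1, 2, 0), as expected.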

Request Method #2

Receive pose updates as they become available. To do so, attach an onPoseAvailable() callback to TangoService_connectOnPoseAvailable(). This sample is from our hello_motion_tracking example project and can be found in the tango_handler.cc file:

TangoCoordinateFramePair pair;
pair.base = TANGO_COORDINATE_FRAME_START_OF_SERVICE;
pair.target = TANGO_COORDINATE_FRAME_DEVICE;
if (TangoService_connectOnPoseAvailable(1, &pair, onPoseAvailable) !=
    TANGO_SUCCESS) {
  LOGE("TangoHandler::ConnectTango, connectOnPoseAvailable error.");
  std::exit(EXIT_SUCCESS);
}

In both cases, you receive a TangoPoseData struct:

typedef struct TangoPoseData {
  int version;
  double timestamp;                // In milliseconds
  double orientation[4];           // As a quaternion
  double translation[3];           // In meters
  TangoPoseStatusType status_code;
  TangoCoordinateFramePair frame;
  int confidence;                  // Currently unused
  float accuracy;                  // Currently unused
} TangoPoseData;

Pose status

TangoPoseData contains a state, denoted by the TangoPoseStatusType enum, which provides information about the status of the pose estimation system. The available TangoPoseStatusType members are:

typedef enum {
  TANGO_POSE_INITIALIZING = 0,
  TANGO_POSE_VALID,
  TANGO_POSE_INVALID,
  TANGO_POSE_UNKNOWN
} TangoPoseStatusType;

INITIALIZING: The motion tracking system is either starting or recovering from an invalid state, and the pose data should not be used.

VALID: The system believes the poses being returned are valid and should be used.

INVALID: The system has encountered difficulty of some kind, so pose estimations are likely incorrect.

UNKNOWN: The system is in an unknown state.
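In application code this usually reduces to a guard on the status before consuming the pose. A minimal sketch (the enum is copied from above; `should_use_pose` is an illustrative helper, not a Tango API function):

```c
typedef enum {
  TANGO_POSE_INITIALIZING = 0,
  TANGO_POSE_VALID,
  TANGO_POSE_INVALID,
  TANGO_POSE_UNKNOWN
} TangoPoseStatusType;

/* Returns 1 only when the pose estimate can be trusted. */
static int should_use_pose(TangoPoseStatusType status) {
    switch (status) {
    case TANGO_POSE_VALID:
        return 1;                 /* safe to drive the app with this pose */
    case TANGO_POSE_INITIALIZING: /* no data yet: keep waiting */
    case TANGO_POSE_INVALID:      /* estimates likely wrong: pause */
    case TANGO_POSE_UNKNOWN:
    default:
        return 0;
    }
}
```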

Lifecycle of pose status

Figure 1: Tango Pose data lifecycle

The TANGO_POSE_INITIALIZING status code indicates that the Tango framework is initializing and pose data is not yet available. If you are using callbacks, you will receive only one pose update with the status code set to TANGO_POSE_INITIALIZING while the framework is initializing.

After initialization finishes, poses are in the TANGO_POSE_VALID state. If you are using callbacks, you will receive updates as frequently as they are available.

If the system encounters difficulty and enters the TANGO_POSE_INVALID state, recovery depends on your configuration during initialization. If config_enable_auto_recovery is set to True, the system immediately resets the motion tracking system and enters the TANGO_POSE_INITIALIZING state. If config_enable_auto_recovery is set to False, pose data remains in the TANGO_POSE_INVALID state and no updates are received until you call TangoService_resetMotionTracking().

Using pose status

Your application should react to the status being returned within the pose data. For example, wait until the pose data you are interested in becomes valid before starting interactions in your application. If the pose becomes invalid, pause interactions until after the system recovers. Depending on your application, what you do after the system recovers will vary. If you are using motion tracking alone, you can simply resume your application. If you are using area learning or ADFs, instruct your user to move around until the device can localize itself.

What Are Tango Events?

Primary events

The unique array of sensors on a Tango-capable device, and the data they receive and transmit, are essential to enabling the device to perform the core Tango activities: motion tracking, depth sensing, and area learning. As with any Android device, this data is received and passed along as "events." The primary Tango events are:

The pose (position and orientation) of the device.
Frames and textures from a camera.
Point clouds, which are generated via depth sensing.

To receive an event, you pass a callback to a function that listens for that event. The callback is called every time the event occurs.
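The registration pattern itself is ordinary C: the service stores your function pointer and invokes it whenever new data arrives. A stripped-down imitation of that shape (all names here are illustrative; real registration goes through the TangoService_connect* functions):

```c
typedef struct { double timestamp; } Pose;          /* stand-in for TangoPoseData */
typedef void (*OnPoseAvailable)(const Pose *pose);  /* callback signature */

static OnPoseAvailable g_pose_callback; /* the "service" remembers the callback */

static void connect_on_pose_available(OnPoseAvailable cb) {
    g_pose_callback = cb;
}

/* The service side: called internally each time a new pose is estimated. */
static void service_emit_pose(double timestamp) {
    if (g_pose_callback) {
        Pose p = { timestamp };
        g_pose_callback(&p);
    }
}

/* A client callback that simply records the latest timestamp. */
static double g_last_timestamp = -1.0;
static void on_pose_available(const Pose *pose) {
    g_last_timestamp = pose->timestamp;
}
```

After connect_on_pose_available(on_pose_available), every service_emit_pose() call lands in the client callback, which is the same shape as attaching onPoseAvailable() in the real API.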

Note: The examples below use the C API, but function calls and data structures are similar for all platforms.

To receive poses

There are two ways to do this; we recommend that you poll for poses using TangoService_getPoseAtTime(). For a detailed explanation, see the Pose topic.

To receive notification of new camera textures

(This is useful for rendering the camera image in OpenGL for augmented reality.)

Pass a callback to TangoService_OnTextureAvailable(). You will also need to call TangoService_updateTextureExternalOes in the GL thread to update the texture.

To receive frames from a camera

(Use this when you need to access the pixel data as an array of bytes, rather than for display.)

Pass a callback to TangoService_connectOnFrameAvailable().

To receive point clouds

Pass an OnPointCloudAvailable() callback to TangoService_connectOnPointCloudAvailable().

Status events

The TangoEvent notification callback signals important sensor status events, such as descriptions of error states. To receive a TangoEvent, attach an onTangoEvent() callback to TangoService_connectOnTangoEvent(). This callback is called every time a Tango status event occurs.

TangoEvent is defined as:

typedef struct TangoEvent {
  double timestamp;
  TangoEventType type;
  const char* event_key;
  const char* event_value;
} TangoEvent;

The timestamp indicates when the status event occurs, and can be compared to timestamps elsewhere in the Tango APIs.
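Because the APIs share one clock, you can match data across sensors by timestamp, for example finding the buffered pose closest to a camera frame. A hypothetical helper (not a Tango API function) that scans a small timestamp buffer:

```c
#include <math.h>
#include <stddef.h>

/* Returns the index of the value in ts[0..n-1] closest to target,
 * or -1 when the buffer is empty. Makes no assumption about ordering. */
static int closest_timestamp(const double *ts, size_t n, double target) {
    if (n == 0)
        return -1;
    size_t best = 0;
    for (size_t i = 1; i < n; ++i)
        if (fabs(ts[i] - target) < fabs(ts[best] - target))
            best = i;
    return (int)best;
}
```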

The type is designated by the TangoEventType enumeration, and tells you which sensor triggered the event:

typedef enum {
  TANGO_EVENT_UNKNOWN,
  TANGO_EVENT_GENERAL,
  TANGO_EVENT_FISHEYE_CAMERA,
  TANGO_EVENT_COLOR_CAMERA,
  TANGO_EVENT_IMU,
  TANGO_EVENT_FEATURE_TRACKING,
} TangoEventType;

The event_key and event_value describe the specific status event that occurred. Here is a table of possible keys:

Event key  |  Explanation
TangoServiceException  |  The service has encountered an exception, and a text description is given in event_value.
FisheyeOverExposed  |  The fisheye image is overexposed with average pixel value event_value px.
FisheyeUnderExposed  |  The fisheye image is underexposed with average pixel value event_value px.
ColorOverExposed  |  The color image is overexposed with average pixel value event_value px.
ColorUnderExposed  |  The color image is underexposed with average pixel value event_value px.
TooFewFeaturesTracked  |  Too few features were tracked in the fisheye image; the number of features tracked is event_value.
Unknown  |  Description unknown.

Based on the specific event description, you may choose to display instructions for users to correct the circumstances leading to the event. For example, if you are receiving ColorUnderExposed, it is likely that the user is in an area where it is too dark for the Tango framework to function well.
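Reacting to a status event is plain string dispatch on event_key. A sketch (the struct mirrors TangoEvent above; the helper name and hint strings are illustrative, not part of the API):

```c
#include <stddef.h>
#include <string.h>

typedef struct {
    double timestamp;
    const char *event_key;
    const char *event_value;
} Event; /* stand-in for TangoEvent */

/* Maps a status event to a user-facing hint; NULL when no prompt is needed. */
static const char *user_hint_for_event(const Event *e) {
    if (strcmp(e->event_key, "ColorUnderExposed") == 0)
        return "It is too dark here; try turning on a light.";
    if (strcmp(e->event_key, "TooFewFeaturesTracked") == 0)
        return "Point the device at a surface with more visual detail.";
    return NULL; /* other events handled elsewhere or ignored */
}
```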

To learn about another way to handle these events, see the UX Framework.

Frames of Reference

When describing the position and orientation of something (for example, your Tango device), it is important to indicate the frame of reference you are using to base your description on.

To help understand frames of reference, consider the following: Saying "Mary is standing three feet away" does not really tell you much. If you want to know Mary's position, you must also address the question "three feet from what?" If you say "Mary is standing three feet in front of the entrance to the Statue of Liberty," you can now establish Mary's position because you are using the Statue of Liberty as your frame of reference and you can measure the distance and direction of Mary relative to the Statue.

But Mary isn't simply a point with a position in 3D space; she also has an orientation, which is described in terms of some type of rotation relative to the frame of reference. In other words, Mary, like all 3D objects, faces a certain direction. A full description of Mary's position and orientation (we call this combination a pose) in 3D space would be something like this: "Mary is standing three feet in front of the entrance to the Statue of Liberty, and she is directly facing it." Now you have provided information about her orientation. If Mary turned to her right, you could say "She is now rotated 90 degrees away from the Statue." This would be another description of orientation.

So how does all of this relate to a Tango device? In order to perform motion tracking, a device reports its pose (position and orientation) relative to its chosen frame of reference, which is fixed in 3D space. For example, the device might say "from the place that I first started motion tracking, I am now three feet forward and one foot up, and I have rotated 30 degrees to the right." By doing this, the device has told you its position using meaningful directions: three feet forward and one foot up from its original starting position. It has also told you about a change in its orientation: rotated 30 degrees to the right relative to its starting position.

To set things up for motion tracking, you must do the following:


1. Choose a base frame. This is the thing you will be measuring from. As mentioned above, it is fixed in 3D space, like the Statue of Liberty in our example above. Example: the COORDINATE_FRAME_START_OF_SERVICE frame.

2. Choose a target frame. This is the thing you will be measuring to. For motion tracking this is usually COORDINATE_FRAME_DEVICE and represents your device's pose at any given instant as it moves through 3D space. The pose of the target frame changes as your device moves, and is measured against the base frame (which never changes), up to 100 times per second. This constant stream of measurements creates your motion track.

The numerical measurements of the pose of the target frame relative to the base frame at any given instant answer the question: "What is the device's position and orientation relative to its base frame of reference?"

In the next section, we discuss the use of start-of-service frame, area description frame, and device pose frame pairs for motion tracking. For certain applications, you may need to choose a frame pair that will enable you to make precise alignments of data sources from device components. We discuss these types of frame pairs later in this topic.

To learn more about the coordinate systems used for frames of reference, see Coordinate System Conventions.

Coordinate frames for motion tracking

The Tango APIs give you various frame pair options for motion tracking:


Target Frame  |  Base Frame
COORDINATE_FRAME_DEVICE  |  COORDINATE_FRAME_START_OF_SERVICE
COORDINATE_FRAME_DEVICE  |  COORDINATE_FRAME_AREA_DESCRIPTION
COORDINATE_FRAME_START_OF_SERVICE  |  COORDINATE_FRAME_AREA_DESCRIPTION

Let's consider a common use case:


Goal: Your app controls a camera in a fully virtual environment. You want the device to always calculate its pose relative to where it was when the Tango service started.

Solution: For the target frame, choose COORDINATE_FRAME_DEVICE. For the base frame, choose COORDINATE_FRAME_START_OF_SERVICE.

Here is the frame pair used in our example project titled cpp_hello_motion_tracking_example:

TangoCoordinateFramePair pair;
pair.base = TANGO_COORDINATE_FRAME_START_OF_SERVICE;
pair.target = TANGO_COORDINATE_FRAME_DEVICE;
if (TangoService_connectOnPoseAvailable(1, &pair, onPoseAvailable) !=
    TANGO_SUCCESS) {
  LOGE("TangoHandler::OnResume, connectOnPoseAvailable error.");
  std::exit(EXIT_SUCCESS);
}

Let's look at the details of individual frame pairs.



Target Frame  |  Base Frame
COORDINATE_FRAME_DEVICE  |  COORDINATE_FRAME_START_OF_SERVICE

This frame pair provides the pose of the device relative to when the Tango service first initialized successfully. This mode accumulates the movement of the device over time since the service started. The service can also detect if there is a motion tracking failure. During this period, the system reports an invalid pose. If TangoService_resetMotionTracking() is called or auto-reset is enabled in the service configuration, the system attempts to re-initialize tracking. After successful re-initialization, it makes a best effort attempt to recover the last known good pose of the device relative to the start of service frame and pick up where it left off. For more information, see Lifecycle of pose status. This frame pair does not include drift correction or localization. If your application does not use drift correction or localization, you can lower processing requirements by disabling area learning mode and not loading an ADF.


Target Frame  |  Base Frame
COORDINATE_FRAME_DEVICE  |  COORDINATE_FRAME_AREA_DESCRIPTION

This frame pair provides the pose of the device, including corrections, relative to the loaded area description's origin. It requires that area learning mode is turned on or a previously created ADF is loaded. If you turn on learning mode without loading an ADF, the origin of the area description base frame is initially the same as start of service. If you load an ADF with or without learning mode, the origin of the area description base frame is the origin stored in the ADF, and you will receive data only after the device has localized. Depending on your configuration settings, this mode is not always available. For more information, see Using Learning Mode and loaded Area Description Files. If you need to use motion tracking before the COORDINATE_FRAME_DEVICE to COORDINATE_FRAME_AREA_DESCRIPTION frame pair becomes valid, you can use the COORDINATE_FRAME_START_OF_SERVICE base frame in the interim.

Note: Drift corrections and localization events cause jumps in the pose. To avoid these jumps, use the COORDINATE_FRAME_START_OF_SERVICE base frame to drive the user-facing elements in your application and incorporate the ADF-driven corrections using COORDINATE_FRAME_START_OF_SERVICE to COORDINATE_FRAME_AREA_DESCRIPTION update callbacks.

For pairs using the COORDINATE_FRAME_DEVICE target frame, updates are available at the pose estimation rate supported by the device.


Target Frame  |  Base Frame
COORDINATE_FRAME_START_OF_SERVICE  |  COORDINATE_FRAME_AREA_DESCRIPTION

This frame pair provides updates only when a localization event or a drift correction occurs. This requires that area learning mode is turned on or a previously created ADF is loaded. If an ADF is loaded, the origin of the area description base frame is the origin stored in the ADF. This isolates the adjustments to the pose of the device from the incremental frame-to-frame motion, allowing you to decide when and how to incorporate the pose adjustments in your application to minimize disruption to the user experience.

Coordinate frames for component alignment

Target Frame  |  Base Frame
COORDINATE_FRAME_DEVICE  |  COORDINATE_FRAME_IMU
COORDINATE_FRAME_CAMERA_COLOR  |  COORDINATE_FRAME_IMU
COORDINATE_FRAME_CAMERA_DEPTH  |  COORDINATE_FRAME_IMU
COORDINATE_FRAME_CAMERA_FISHEYE  |  COORDINATE_FRAME_IMU

Some applications need to align multiple data sources, such as the data from the color and depth cameras. You can pair the COORDINATE_FRAME_IMU base frame with one of the component target frames for these scenarios:

You want to query the relative offsets of the individual components to the IMU frame of reference without knowing the layout of the specific device.

You want the virtual image from the rendering camera to align with the center of the display.

Combined with the motion tracking coordinate frames and timestamps on the data, these offsets give you a more complete understanding of the various sensor inputs in both space and time. This is necessary for aligning and compositing multiple data sources together.

Note: The relative offsets between two components are sometimes referred to as the extrinsic parameters.

Since devices are designed to be mechanically rigid, these offsets are not expected to change and updates will not occur in the API. However, devices vary in how their components are spaced. Updating extrinsic parameters over time is not currently supported by the Tango APIs. These values are generated either from a one-time factory calibration or from the manufacturer's mechanical design files. Applications that have extremely tight requirements for the extrinsic parameters should consider implementing their own calibration procedure, which can be performed by the end user.

The COORDINATE_FRAME_IMU base frame provides a common reference point for all of the internal components in the device. The origin of this base frame does not necessarily correspond to any one particular component and may differ between devices. Like other Android sensors, the axis of the device coordinate frame is aligned with the natural (default) orientation of the device as defined by the manufacturer. The manufacturer-defined natural orientation of the device may not match the desired orientation of your app. For maximum future compatibility, do not assume a Tango-compliant device has a natural orientation that is either landscape or portrait. Instead, use the Android getRotation() method to determine screen rotation, and then use the Android remapCoordinateSystem() method to map sensor coordinates to screen coordinates. For more general information about sensors, see the Android documentation on the sensor coordinate system. For a more detailed discussion of issues surrounding device orientation, see this Android Developers Blog post.

The component offsets are static and should only need to be queried once.

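As a sketch of what "query once" looks like in practice: cache the extrinsics at startup and derive inter-component offsets from them. For brevity this illustrative helper handles only the translation part (a full solution would also compose the rotations, e.g. device_T_color = inverse(imu_T_device) * imu_T_color); the names here are hypothetical, not Tango API calls:

```c
/* Given the translations of two components expressed in the IMU frame,
 * computes the offset of the color camera relative to the device frame.
 * Translation-only simplification: rotations are ignored for brevity. */
static void component_offset(const double imu_t_device[3],
                             const double imu_t_color[3],
                             double device_t_color[3]) {
    for (int i = 0; i < 3; ++i)
        device_t_color[i] = imu_t_color[i] - imu_t_device[i];
}
```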

Note: The unit of measurement for coordinate frame pairs is meters.

