AprilTag localization notes (C++, OpenCV 3.4.5)


    References:

    AprilTag-based pose estimation: principle and complete code (camera frame and world frame), Jianshu, 2021-06-23

    AprilTag, part 2: orientation (pose) estimation, arczee's blog, CSDN

    https://www.cnblogs.com/brt2/p/13547075.html

    Angle and distance detection with AprilTag, 卓晴's blog, CSDN

    1. AprilTag Overview

    AprilTag is a visual fiducial system; similar fiducial markers include ArUco, ARTag, and TopoTag. Because AprilTag is robust to rotation, illumination changes, and blur, it is widely used in target tracking, indoor localization, and augmented reality. This article shows how to obtain an AprilTag's position both in the camera frame and in the world frame.

    1.1 Camera Intrinsics

    The camera intrinsics are stored in K, e.g. K = (cv::Mat_<double>(3, 3) << 471.33715, 0.0, 437.19435, 0.0, 472.0701, 275.1862, 0.0, 0.0, 1.0); here the horizontal focal length is fx = 471.33715, the vertical focal length is fy = 472.0701, and the principal point is (cx, cy) = (437.19435, 275.1862).
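    For reference, a minimal sketch of how the entries of K map to the individual intrinsics; the same extraction appears in the detector setup of section 4:

    #include <opencv2/core.hpp>

    // K = [fx 0 cx; 0 fy cy; 0 0 1]
    void readIntrinsics()
    {
        cv::Mat K = (cv::Mat_<double>(3, 3) << 471.33715, 0.0, 437.19435,
                                               0.0, 472.0701, 275.1862,
                                               0.0, 0.0, 1.0);
        double fx = K.at<double>(0, 0); // horizontal focal length: 471.33715
        double fy = K.at<double>(1, 1); // vertical focal length:   472.0701
        double cx = K.at<double>(0, 2); // principal point x:       437.19435
        double cy = K.at<double>(1, 2); // principal point y:       275.1862
    }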

    Tutorial: generating tag images for the different AprilTag families.

    2. AprilTag Resources

    The AprilTag paper trilogy:

    1. AprilTag: A robust and flexible visual fiducial system (2011)

    2. AprilTag 2: Efficient and robust fiducial detection (2016)

    3. Flexible Layouts for Fiducial Tags (2019, AprilTag 3)

    Some papers applying AprilTag:

    1. UAV Autonomous Landing Technology Based on AprilTags Vision Positioning Algorithm.
    2. Pose Estimation for Multicopters Based on Monocular Vision and AprilTag.
    3. Image Digital Zoom Based Single Target Apriltag Recognition Algorithm in Large Scale Changes on the Distance.

    Bundle download of the above materials: https://cloud.189.cn/web/share?code=JRJRJbNfe2ay (access code: 7l96)

    AprilTag source code: https://github.com/AprilRobotics/apriltag

    Official site: https://april.eecs.umich.edu/software/apriltag.html

    2.1 Parameter Configuration

    td_ = apriltag_detector_create();
    tf_ = tag36h11_create();                        // tag family
    apriltag_detector_add_family_bits(td_, tf_, 3);
    td_->quad_decimate = 1; // downsample the input by this factor; experimentally a larger
                            // value greatly reduces detection time without hurting results
    td_->quad_sigma = 0;    // Gaussian blur; useful for noisy images
    td_->nthreads = 4;      // number of worker threads
    td_->debug = 0;         // no debug output
    td_->refine_edges = 1;  // snap quad edges to strong gradients
    info_.tagsize = APRILTAG_DETECTED_SIZE;         // physical tag size

    2.2 Meaning of the Detection Result
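    The detector returns a zarray_t of apriltag_detection_t. For reference, the struct's fields as declared in the library's apriltag.h (abridged; the same comment block reappears in the code of section 4):

    struct apriltag_detection
    {
        apriltag_family_t *family; // the tag family the detection belongs to
        int id;                    // decoded tag ID
        int hamming;               // how many bit errors were corrected
        float decision_margin;     // decode confidence (higher is better)
        matd_t *H;                 // homography taking (-1,1), (1,1), (1,-1), (-1,-1) to pixels
        double c[2];               // tag center in pixel coordinates
        double p[4][2];            // corner pixels, counter-clockwise from the lower left
    };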

    3. How AprilTag Works

    Tag detection

    0. Gaussian filtering (mitigates edge occlusion)

    1. Front end: detect the tag (pixel gradients, line fitting, quad search, family matching)

    How it works

    Adaptive thresholding
    The input image (a) is converted to a grayscale image (b), and an adaptive threshold is applied. The idea of adaptive thresholding is to choose a sensible segmentation threshold within each pixel neighborhood, commonly the local mean or median of the gray values. The authors first divide the image into 4x4-pixel tiles and compute the minimum and maximum gray value of each tile; the tile extrema are then filtered with a 3-neighborhood min/max filter, and the filtered mean (max + min)/2 becomes the threshold for that tile, assigning each pixel black or white to form a binary image. Tiling mainly adds robustness: regional statistics are more stable than single pixels, which suppresses random noise and also improves efficiency.
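    A minimal OpenCV sketch of this tile-based thresholding, under the simplifying assumptions that the image dimensions are multiples of the tile size (the library version also skips low-contrast tiles):

    #include <opencv2/opencv.hpp>

    // Split the gray image into tiles, take per-tile min/max, filter the tile
    // extrema over a 3x3 tile neighborhood, then binarize each pixel against
    // (min + max) / 2 of its tile.
    cv::Mat tileThreshold(const cv::Mat &gray, int tile = 4)
    {
        int tw = gray.cols / tile, th = gray.rows / tile;
        cv::Mat tmin(th, tw, CV_8U), tmax(th, tw, CV_8U);
        for (int ty = 0; ty < th; ty++)
            for (int tx = 0; tx < tw; tx++)
            {
                cv::Rect r(tx * tile, ty * tile, tile, tile);
                double lo, hi;
                cv::minMaxLoc(gray(r), &lo, &hi);
                tmin.at<uchar>(ty, tx) = (uchar)lo;
                tmax.at<uchar>(ty, tx) = (uchar)hi;
            }
        // 3-neighborhood min/max filtering of the tile extrema:
        cv::erode(tmin, tmin, cv::Mat::ones(3, 3, CV_8U));   // min filter
        cv::dilate(tmax, tmax, cv::Mat::ones(3, 3, CV_8U));  // max filter
        cv::Mat bin(gray.size(), CV_8U, cv::Scalar(0));
        for (int y = 0; y < th * tile; y++)
            for (int x = 0; x < tw * tile; x++)
            {
                int ty = y / tile, tx = x / tile;
                int thresh = (tmin.at<uchar>(ty, tx) + tmax.at<uchar>(ty, tx)) / 2;
                bin.at<uchar>(y, x) = gray.at<uchar>(y, x) > thresh ? 255 : 0;
            }
        return bin;
    }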

    Segmenting continuous boundaries
    On the binary image, find the edges that might form tag boundaries (c). A simple approach is to identify edge pixels whose neighbors have the opposite color and then group connected edge pixels. However, this fails when the white space between two tag boundaries is only about a single pixel wide: if two tag boundaries are wrongly merged, neither tag will be detected. The edges must instead be segmented according to the identities of the black and white components they separate. A union-find algorithm segments the connected components of light and dark pixels, giving each component a unique ID.
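    A minimal union-find sketch to illustrate the idea (the library ships its own optimized unionfind_t; this is not that implementation):

    #include <numeric>
    #include <vector>

    // Union-find (disjoint set) with path compression, as used conceptually
    // in the connected-components step.
    struct UnionFind
    {
        std::vector<int> parent;
        explicit UnionFind(int n) : parent(n)
        {
            std::iota(parent.begin(), parent.end(), 0); // each pixel starts as its own component
        }
        int find(int x)
        {
            while (parent[x] != x)
            {
                parent[x] = parent[parent[x]]; // path compression
                x = parent[x];
            }
            return x;
        }
        void unite(int a, int b) { parent[find(a)] = find(b); }
    };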

    Fitting quadrilaterals
    The unordered clusters of boundary points are fitted to quadrilaterals (d), producing a set of candidate quads to decode. The hardest part is finding the four corners: for a regular square or rectangle this is relatively easy, but detected tags are distorted in general. The unordered contour points are first sorted by angle about their centroid. Walking through the sorted points in order, points within a window around the current index are fitted to a line, the index is advanced iteratively, and the total fitting error of each line is computed; a low-pass filter over the error sums makes the procedure more robust, and the corner indices of the four lines with the largest error sums are taken as the quad corners. Lines are then fitted to the points between corners, and the intersections of those four lines give the tag's vertices. The quad detector finds many quad-like structures in the environment, including specular reflections, light switches, and even single tag pixels; the decoding step compares each quad's payload against the known codewords, filters out false candidates, and outputs valid tag detections (e).
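    A small sketch of the first step just described, sorting an unordered point cluster counter-clockwise around its centroid (illustrative only):

    #include <opencv2/core.hpp>
    #include <algorithm>
    #include <cmath>
    #include <vector>

    // Sort an unordered cluster of boundary points by angle around its
    // centroid, the precondition for the line-fitting sweep that extracts
    // the four quad corners.
    void sortByAngleAroundCentroid(std::vector<cv::Point2f> &pts)
    {
        cv::Point2f c(0, 0);
        for (const auto &p : pts) c += p;
        c *= 1.0f / pts.size();
        std::sort(pts.begin(), pts.end(),
                  [&](const cv::Point2f &a, const cv::Point2f &b)
                  {
                      return std::atan2(a.y - c.y, a.x - c.x) <
                             std::atan2(b.y - c.y, b.x - c.x);
                  });
    }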

    1. Check whether the blur parameter (quad_sigma) is enabled

    2. apriltag_quad_thresh finds candidate quads -> step 1. threshold the image, creating the edge image -> segmentation image

    zarray_t *apriltag_quad_thresh(apriltag_detector_t *td, image_u8_t *im)
    {
        // step 1. threshold the image, creating the edge image.
        int w = im->width, h = im->height;
        image_u8_t *threshim = threshold(td, im);   // adaptive threshold (see above)
        int ts = threshim->stride;
        if (td->debug)
            image_u8_write_pnm(threshim, "debug_threshold.pnm");

        // step 2. find connected components (union-find).
        unionfind_t *uf = connected_components(td, threshim, w, h, ts);

        // make segmentation image: cluster boundary pixels by the pair of
        // components they separate.
        zarray_t *clusters = gradient_clusters(td, threshim, w, h, ts, uf);

        // step 3. process each connected component: fit quads
        // (a depth-4 search determines the quadrilateral).
        zarray_t *quads = fit_quads(td, w, h, clusters, im);

        return quads; // (cleanup of threshim/uf/clusters omitted in this excerpt)
    }

    Decoding: classify each cell of the 8x8 grid as black or white, building the code word bit by bit (rcode <<= 1, setting the low bit when the cell is black).

    Within each accepted quad, the grid of sample points is located. In the gray image, the mean intensity Value1 of the outermost ring of grid points is computed, along with the mean intensity Value2 of the second ring. By the design of the AprilTag code families, every point on the outermost ring of the quad is black, while the next ring mixes black and white; under the same illumination there is therefore a clear separation between Value1 and Value2, and their mean is taken as the threshold. The whole grid is then traversed again: points brighter than the threshold are encoded as 0 and darker points as 1. Encoding row by row until the grid is exhausted yields a binary string whose length depends on the code family (36, 25, or 16 bits). Every quad yields such a string representing the observed code; incorrectly detected quads usually produce invalid codes that cannot be matched to any ID within the threshold, so they are easily rejected. Encoding each quad and validating the match is therefore an important step.

    To further verify the code, it is matched against the known codebook. Since the observed code may be rotated, it is first rotated by 90° three times to obtain the codes for all four orientations, and each is compared with the codebook by computing the Hamming distance (the number of differing bits between two binary strings, a measure of their similarity). If the Hamming distance between the observed code and some codebook entry is below a given threshold (typically 2), the observed code is assigned that entry's ID and the distance is recorded; if no entry matches, the observed code is considered invalid and the corresponding quad is discarded.
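    A minimal sketch of this matching logic (illustrative: the real rotate90 bit permutation depends on the family's bit layout, and the library uses a hash table rather than a linear scan):

    #include <bitset>
    #include <cstdint>

    // Hamming distance: the number of differing bits between two codes.
    int hammingDistance(uint64_t a, uint64_t b)
    {
        return (int)std::bitset<64>(a ^ b).count();
    }

    // Rotate a d*d code grid by 90 degrees (bit i = row * d + col).
    uint64_t rotate90(uint64_t code, int d)
    {
        uint64_t out = 0;
        for (int r = 0; r < d; r++)
            for (int c = 0; c < d; c++)
                if ((code >> (r * d + c)) & 1)
                    out |= 1ULL << (c * d + (d - 1 - r));
        return out;
    }

    // Try all four orientations against the codebook; accept an entry whose
    // Hamming distance is within the threshold (typically 2).
    bool matchCode(uint64_t observed, const uint64_t *codebook, int n, int d,
                   int maxHamming, int *id, int *rotation)
    {
        for (int rot = 0; rot < 4; rot++)
        {
            for (int i = 0; i < n; i++)
                if (hammingDistance(observed, codebook[i]) <= maxHamming)
                {
                    *id = i;
                    *rotation = rot;
                    return true;
                }
            observed = rotate90(observed, d);
        }
        return false; // no match: reject this quad
    }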

    Robustness to interior occlusion: Step 2. Decode tags from each quad.

    apriltag_detector_detect -> quad_decode -> quad_decode_task -> quad_decode -> quick_decode_codeword: allowing a deviation of 5 here tolerates interior occlusion, but it cannot handle occlusion of the tag border.

    // qd might be null if detector_add_family_bits() failed
    for (int ridx = 0; qd != NULL && ridx < 4; ridx++) // try all four orientations
    {
        for (int bucket = rcode % qd->nentries;
             qd->entries[bucket].rcode != UINT64_MAX;
             bucket = (bucket + 1) % qd->nentries)     // open-addressing probe
        {
            if (abs(qd->entries[bucket].rcode - rcode) < 5) // tolerate small deviations
            {
                *entry = qd->entries[bucket];
                entry->rotation = ridx;
                return;
            }
        }
        rcode = rotate90(rcode, tf->nbits);
    }

    After this screening and validation, the observed code's ID and rotation are determined, and the tag's remaining parameters can be computed: its rotation relative to the canonical tag, its similarity to the matched code, and above all its homography matrix. Under the pinhole camera model, the same planar figure seen in different spaces is related: a homography maps the figure from one plane onto the corresponding one in another, and the homography matrix can be decomposed into a rotation and a translation. From the four vertex coordinates found in the previous step, the homography relative to the canonical tag coordinates is computed; combining it with the camera intrinsics yields the tag's pose. Decomposing the matrix gives the rotation vector rvec and the translation vector tvec, and from the rotation vector the tag's true orientation can be recovered.
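    A minimal Eigen sketch of the standard planar-homography decomposition this describes (assumptions: H maps tag coordinates to pixels and K is known; the library's homography_to_pose follows the same idea):

    #include <Eigen/Dense>

    // Planar decomposition: with B = K^-1 * H, the first two columns are
    // lambda * r1 and lambda * r2, and the third is lambda * t.
    void homographyToPose(const Eigen::Matrix3d &K, const Eigen::Matrix3d &H,
                          Eigen::Matrix3d &R, Eigen::Vector3d &t)
    {
        Eigen::Matrix3d B = K.inverse() * H;
        double lambda = 1.0 / B.col(0).norm(); // scale so r1 has unit length
        if (B(2, 2) < 0) lambda = -lambda;     // keep the tag in front of the camera
        Eigen::Vector3d r1 = lambda * B.col(0);
        Eigen::Vector3d r2 = lambda * B.col(1);
        Eigen::Vector3d r3 = r1.cross(r2);     // complete the rotation
        R.col(0) = r1; R.col(1) = r2; R.col(2) = r3;
        // Re-orthonormalize: nearest proper rotation via SVD.
        Eigen::JacobiSVD<Eigen::Matrix3d> svd(R, Eigen::ComputeFullU | Eigen::ComputeFullV);
        R = svd.matrixU() * svd.matrixV().transpose();
        t = lambda * B.col(2);
    }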

    2. Back end: solving for R and t with orthogonal iteration

    MATLAB cross-check: Pose measurement | the principle of the orthogonal iteration (OI) algorithm and its MATLAB implementation, lovetaozibaby's blog, CSDN

    Source code:

    double orthogonal_iteration(matd_t **v, matd_t **p, matd_t **t, matd_t **R, int n_points, int n_steps)
    {
        /* Normalize the feature points and compute p_mean;
           p_mean is fixed and never changes. */
        matd_t *p_mean = matd_create(3, 1);
        for (int i = 0; i < n_points; i++)
        {
            matd_add_inplace(p_mean, p[i]); // accumulate the object points
        }
        matd_scale_inplace(p_mean, 1.0 / n_points); // p_mean /= n
        matd_t **p_res = malloc(sizeof(matd_t *) * n_points);
        for (int i = 0; i < n_points; i++)
        {
            p_res[i] = matd_op("M-M", p[i], p_mean); // centered points p - p_mean
        }
        /* Fixed part of the optimal translation for the current frame. */
        matd_t **F = malloc(sizeof(matd_t *) * n_points);
        matd_t *avg_F = matd_create(3, 3);
        for (int i = 0; i < n_points; i++)
        {
            F[i] = calculate_F(v[i]); // F_i: projection matrix along the line of sight
            matd_add_inplace(avg_F, F[i]);
        }
        matd_scale_inplace(avg_F, 1.0 / n_points);
        // Compute M1_inv.
        matd_t *I3 = matd_identity(3);         // identity
        matd_t *M1 = matd_subtract(I3, avg_F); // I - mean(F)
        matd_t *M1_inv = matd_inverse(M1);     // (I - (1/n) * sum(F_i))^-1
        matd_destroy(avg_F);
        matd_destroy(M1);
        double prev_error = HUGE_VAL; // infinity
        double the = 37.0 / 180.0 * 3.14; // 37 degrees in radians (approx.)
        double hmin = 0.1;
        double hmax = 0.4;
        // Iterate (n_steps iterations, e.g. 500).
        for (int i = 0; i < n_steps; i++)
        {
            // Calculate translation: the optimal t for the current R.
            matd_t *M2 = matd_create(3, 1);
            for (int j = 0; j < n_points; j++)
            {
                matd_t *M2_update = matd_op("(M - M)*M*M", F[j], I3, *R, p[j]); // (F - I) * R * p
                matd_add_inplace(M2, M2_update); // accumulate over all points
                matd_destroy(M2_update);
            }
            matd_scale_inplace(M2, 1.0 / n_points); // M2 *= 1/n
            matd_destroy(*t);
            *t = matd_multiply(M1_inv, M2); // optimal translation for this frame
            // printf("************z %f***********\n", matd_get(*t, 0, 1));
            /* Optional clamping of t (disabled):
            if (matd_get(*t, 0, 1) < -0.9) // limit z / height
                matd_put(*t, 0, 1, -0.9);
            else if (matd_get(*t, 0, 1) > 0.2)
                matd_put(*t, 0, 1, 0.2); */
            /* if (matd_get(*t, 0, 1) < -tan(the) * (matd_get(*t, 0, 2) - hmin / sin(the)))
            {
                matd_put(*t, 0, 1, -tan(the) * (matd_get(*t, 0, 2) - hmin / sin(the)));
            }
            else if (matd_get(*t, 0, 1) > -tan(the) * (matd_get(*t, 0, 2) - hmax / sin(the)))
            {
                matd_put(*t, 0, 1, -tan(the) * (matd_get(*t, 0, 2) - hmax / sin(the)));
            } */
            /* if (matd_get(*t, 0, 0) < -1.0) // limit x
                matd_put(*t, 0, 0, -0.0);
            else if (matd_get(*t, 0, 0) > 1.0)
                matd_put(*t, 0, 0, 0.0); */
            matd_destroy(M2);
            // Calculate rotation: build the projected points q_i.
            matd_t **q = malloc(sizeof(matd_t *) * n_points);
            matd_t *q_mean = matd_create(3, 1);
            for (int j = 0; j < n_points; j++)
            {
                q[j] = matd_op("M*(M*M+M)", F[j], *R, p[j], *t); // q_i = F * (R * p + t)
                matd_add_inplace(q_mean, q[j]);
            }
            matd_scale_inplace(q_mean, 1.0 / n_points);
            matd_t *M3 = matd_create(3, 3);
            for (int j = 0; j < n_points; j++)
            {
                matd_t *M3_update = matd_op("(M-M)*M'", q[j], q_mean, p_res[j]); // (q - q_mean) * p_res'
                matd_add_inplace(M3, M3_update);
                matd_destroy(M3_update);
            }
            matd_svd_t M3_svd = matd_svd(M3); // singular value decomposition
            matd_destroy(M3);
            matd_destroy(*R);
            *R = matd_op("M*M'", M3_svd.U, M3_svd.V); // R = U * V'
            /* The determinant measures how a linear map scales area (2-D) or
               volume (3-D): it is the scaling factor of the unit square or unit
               cube under the transform. A negative determinant means the
               orientation of space has been flipped (a reflection). */
            double R_det = matd_det(*R);
            if (R_det < 0)
            {
                // Flip the last column so R is a proper rotation.
                matd_put(*R, 0, 2, -matd_get(*R, 0, 2));
                matd_put(*R, 1, 2, -matd_get(*R, 1, 2));
                matd_put(*R, 2, 2, -matd_get(*R, 2, 2));
            }
            matd_destroy(M3_svd.U);
            matd_destroy(M3_svd.S);
            matd_destroy(M3_svd.V);
            matd_destroy(q_mean);
            for (int j = 0; j < n_points; j++)
            {
                matd_destroy(q[j]);
            }
            double error = 0;
            for (int j = 0; j < 4; j++)
            {
                // Object-space residual (I - F)(R * p + t), with F the projection onto the ray.
                matd_t *err_vec = matd_op("(M-M)(MM+M)", I3, F[j], *R, p[j], *t);
                // Squared norm via the dot product of the residual with itself.
                error += matd_to_double(matd_op("M'M", err_vec, err_vec));
                matd_destroy(err_vec);
            }
            prev_error = error;
            free(q);
        }
        matd_destroy(I3);
        matd_destroy(M1_inv);
        for (int i = 0; i < n_points; i++)
        {
            matd_destroy(p_res[i]);
            matd_destroy(F[i]);
        }
        free(p_res);
        free(F);
        matd_destroy(p_mean);
        return prev_error;
    }
    1. The homography computed by AprilTag
      After configuring the parameters and running detection, every detected tag comes with the homography matrix of its four corners, which maps the canonical vertices (-1,1), (1,1), (1,-1), (-1,-1) to their detected pixel positions.

    2. Homography and extrinsic estimation
      Given the camera intrinsics K, the homography can be used to estimate the camera's pose relative to the tag; the method below follows the AprilTag source code. Taking the tag's top-right corner as an example, in the tag frame X = Y (= 1), and its projection satisfies s * (u, v, 1)^T = H * (X, Y, 1)^T, with H proportional to K * [r1 r2 t].

    3. Position and angle computation.

    4. Distance estimation: by the projective relation, the distance from the camera to the tag follows from the geometric relation between the detected tag's diagonal length in pixels and the camera's focal length (see the sketch below).
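    A minimal sketch of that similar-triangles relation (names illustrative; fx in pixels, the tag diagonal in meters, opposite corners taken from det->p):

    #include <cmath>

    // Pinhole projection: pixel_length = f * physical_length / distance,
    // so distance = f * physical_length / pixel_length.
    double estimateDistance(double fx, double tagDiagonalMeters,
                            const double p0[2], const double p2[2])
    {
        double dx = p2[0] - p0[0], dy = p2[1] - p0[1];
        double diagPixels = std::sqrt(dx * dx + dy * dy); // detected diagonal in pixels
        return fx * tagDiagonalMeters / diagPixels;
    }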

    3. Solving and Refining the Pose

    We now know the rotation matrix R and the translation vector t from the world frame to the camera frame. From these two matrices we can estimate the camera's position in the world frame and the tag's position in the camera frame.

    1. Camera position in the world frame

    From the relation between the camera frame and the world frame, a point X_w in the world maps to the camera frame as

        X_c = R * X_w + t

    Because the camera center has coordinates (0, 0, 0)^T in its own frame, setting X_c = 0 gives

        X_cam^w = -R^(-1) * t = -R^T * t
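    In code, a minimal Eigen sketch consistent with the CalculatePose implementation in section 4:

    #include <Eigen/Dense>

    // Given world->camera extrinsics X_c = R * X_w + t, the camera center in
    // world coordinates is -R^T * t (R is orthonormal, so R^-1 = R^T).
    Eigen::Vector3d cameraPositionInWorld(const Eigen::Matrix3d &R,
                                          const Eigen::Vector3d &t)
    {
        return -R.transpose() * t;
    }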

    4. AprilTag Detection Source Code

    /* Debugging experience:
    1. The accuracy of the camera pose depends mainly on the pixel accuracy of the four
       detected tag corners. In past tests, corner misdetections caused large pose errors,
       mainly in three situations:
       (a) the image is slightly blurred during motion (especially while turning);
       (b) a large tilt between camera and tag (errors grow noticeably beyond 30 degrees);
       (c) the farther the distance, the larger the error; beyond 2 m use with caution.
       Other situations may also cause misdetections.
    2. The camera intrinsics also affect the computed pose: calibrate them carefully
       (reprojection error < 0.15) and correct the lens distortion.
    Hence strict validity checks are needed, and ideally the camera should face the tag head-on.
    */
    #include "tag_station_detection.hpp"

    CREATE_LOGex(TagStationDetectionTask);

    TagStationDetection::TagStationDetection()
    {
        td_ = apriltag_detector_create();
        tf_ = tag36h11_create();
        // apriltag_detector_add_family(td_, tf_);
        apriltag_detector_add_family_bits(td_, tf_, 3);
        td_->quad_decimate = 1.0; // downsample factor; larger values greatly reduce detection time without hurting results
        td_->quad_sigma = 0;      // Gaussian blur; useful for noisy images
        td_->nthreads = 2;        // number of worker threads
        td_->debug = 0;           // no debug output
        td_->refine_edges = 1;
        info_.tagsize = APRILTAG_DETECTED_SIZE;
        cv::Mat K, D;
        if (HIGH_RESOLUTION)
        {
            K = (cv::Mat_<double>(3, 3) << 471.33715, 0.0, 437.19435, 0.0, 472.0701, 275.1862, 0.0, 0.0, 1.0);
            D = (cv::Mat_<double>(5, 1) << -0.305921, 0.060122, -0.000640, 0.001716, 0.000000);
        }
        else
        {
            // K = (cv::Mat_<double>(3, 3) << 343.029387, 0.0, 320.552669, 0.0, 342.238068, 200.402539, 0.0, 0.0, 1.0);
            // D = (cv::Mat_<double>(5, 1) << -0.343152, 0.095372, -0.003143, -0.001278, 0.000000);
            K = (cv::Mat_<double>(3, 3) << 319.1774094207188, 0.0, 315.9695951747343, 0.0, 319.5137994758127, 199.07306955419622, 0.0, 0.0, 1.0);
            D = (cv::Mat_<double>(5, 1) << -0.3204769598908819, 0.07196729379479753, -0.0019994529282904004, 0.0008377275431221735, 0.0);
        }
        cv::Size img_size(img_width_, img_height_);
        // Undistortion maps (note: the second call below overwrites the first).
        initUndistortRectifyMap(K, D, cv::Mat(), K, img_size, CV_16SC2, map1_, map2_);
        cv::Mat new_camera_matric = cv::getOptimalNewCameraMatrix(K, D, img_size, 1);
        cv::initUndistortRectifyMap(K, D, cv::Mat(), new_camera_matric, img_size, CV_16SC2, map1_, map2_);
        if (TAG_IMG_UNDISTORT)
            K = new_camera_matric;
        info_.fx = K.at<double>(0, 0);
        info_.fy = K.at<double>(1, 1);
        info_.cx = K.at<double>(0, 2);
        info_.cy = K.at<double>(1, 2);
        Eigen::Vector3d ea_cam_slope(0, 0, -37.0 / 180 * CV_PI);
        // Eigen::Vector3d ea_cam_slope(0, 0, 0.0 / 180 * CV_PI);
        // Camera mounting rotation, composed about the axes in ZYX order:
        rot_cam_slope_ = Eigen::AngleAxisd(ea_cam_slope[0], Eigen::Vector3d::UnitZ()) * // 0 rad about Z (identity)
                         Eigen::AngleAxisd(ea_cam_slope[1], Eigen::Vector3d::UnitY()) * // 0 rad about Y (unchanged)
                         Eigen::AngleAxisd(ea_cam_slope[2], Eigen::Vector3d::UnitX());  // -37/180*pi rad about X (right-multiplied)
        // Body -> camera: rotate 90 about Z, -180 about Y, 90 about X, then translate 0.1 m up in z.
        Eigen::Quaterniond quat_b_c(0.5, -0.5, 0.5, -0.5); // (w, x, y, z) = 0.5 - 0.5i + 0.5j - 0.5k
        Eigen::Matrix3d rot_mat_b_c;
        rot_mat_b_c = quat_b_c.toRotationMatrix(); // quaternion -> rotation matrix
        Eigen::Translation3d trans_b_c(0.0, 0.0, 0.1);
        Tb_c_ = trans_b_c * rot_mat_b_c;
        // Tag -> tag station: rotate 0 about Z, -90 about Y, 90 about X, no translation.
        Eigen::Quaterniond quat_t_tt(0.5, 0.5, -0.5, 0.5); // (w, x, y, z)
        Eigen::Matrix3d rot_mat_t_tt = quat_t_tt.toRotationMatrix();
        Eigen::Translation3d trans_t_tt(0, 0, 0);
        Tt_tt_ = trans_t_tt * rot_mat_t_tt;
        // Vehicle offset: no rotation, translate -0.21 m along x.
        Eigen::Quaterniond quat_car(1, 0, 0, 0); // (w, x, y, z)
        Eigen::Matrix3d rot_car = quat_car.toRotationMatrix();
        Eigen::Translation3d trans_car(-0.21, 0, 0);
        T_car_ = trans_car * rot_car;
    }

    TagStationDetection::~TagStationDetection()
    {
    }

    bool TagStationDetection::OpenCam(int cam_id)
    {
        LINFO(TagStationDetectionTask, "Enabling video capture");
        cap_ = cv::VideoCapture(cam_id);
        cap_.set(cv::CAP_PROP_FRAME_WIDTH, img_width_);
        cap_.set(cv::CAP_PROP_FRAME_HEIGHT, img_height_);
        cv::TickMeter meter;
        meter.start();
        if (!cap_.isOpened())
        {
            LERROR(TagStationDetectionTask, "Couldn't open video capture device");
            return false;
        }
        else
        {
            meter.stop();
            LINFO(TagStationDetectionTask,
                  "Detector initialized in " << meter.getTimeSec() << "s"
                      << ", " << cap_.get(cv::CAP_PROP_FRAME_WIDTH)
                      << "x" << cap_.get(cv::CAP_PROP_FRAME_HEIGHT)
                      << " @" << cap_.get(cv::CAP_PROP_FPS) << "FPS");
            meter.reset();
            return true;
        }
    }

    void TagStationDetection::ReleaseCam()
    {
        cap_.release();
        cv::destroyAllWindows();
    }

    bool TagStationDetection::OpenVideo(const std::string &video_name)
    {
        cap_.open(video_name.c_str());
        if (!cap_.isOpened())
        {
            std::cerr << "Couldn't open video " << video_name.c_str() << std::endl;
            return false;
        }
        else
            return true;
    }

    bool TagStationDetection::DetectCamFrameTag(TagPose &tag_pose)
    {
        bool if_found = false;
        cap_ >> frame_;
        if (frame_.empty())
            return if_found;
        if (TAG_IMG_UNDISTORT)
        {
            cv::Mat UndistortImage;
            remap(frame_, UndistortImage, map1_, map2_, cv::INTER_LINEAR); // undistort
            frame_ = UndistortImage;
        }
        cvtColor(frame_, gray_, cv::COLOR_BGR2GRAY);
        image_u8_t im = {.width = gray_.cols,
                         .height = gray_.rows,
                         .stride = gray_.cols,
                         .buf = gray_.data}; // wrap as the apriltag image type
        zarray_t *detections = apriltag_detector_detect(td_, &im); // detect tags
        /* typedef struct apriltag_detection apriltag_detection_t;
        struct apriltag_detection
        {
            apriltag_family_t *family;
            int id;                // tag ID
            int hamming;
            float decision_margin;
            matd_t *H;             // maps (-1,1), (1,1), (1,-1), (-1,-1) to pixel coordinates
            double c[2];           // tag center in pixel coordinates
            double p[4][2];        // corner pixels, counter-clockwise from the lower left
        }; */
        for (int i = 0; i < zarray_size(detections); i++)
        {
            apriltag_detection_t *det;
            zarray_get(detections, i, &det);
            if (det->id == 0) // only accept the tag with ID 0
            {
                if (det->decision_margin > 50.0)
                {
                    if_found = true;
                    info_.det = det;
                    apriltag_pose_t pose;
                    // pose origin: center of tag
                    // Solved with the apriltag library's own estimator, which implies
                    // converting between the library's frame and ours:
                    /* apriltag_detection_info_t tag_info;
                    tag_info.cx = cameraParam.m_cx;
                    tag_info.cy = cameraParam.m_cy;
                    tag_info.fx = cameraParam.m_fx;
                    tag_info.fy = cameraParam.m_fy;
                    tag_info.tagsize = find_mark.length;
                    tag_info.det = det;
                    apriltag_pose_t pose;
                    estimate_tag_pose(&tag_info, &pose);
                    Vector3d ori_relative_P;
                    Matrix3d ori_rotation_matrix3d;
                    memcpy(&ori_relative_P, pose.t->data, sizeof(Vector3d));
                    memcpy(&ori_rotation_matrix3d, pose.R->data, sizeof(Matrix3d)); */
                    double err = estimate_tag_pose(&info_, &pose);
                    CalculatePose(pose, tag_pose);
                    if (APRILTAG_DETECTED_IMG_VISUALIZATION)
                        DetectedVisualization(det);
                }
                break;
            }
            else
                continue;
        }
        if (not if_found)
        {
            if (APRILTAG_DETECTED_IMG_VISUALIZATION)
                DetectedVisualization();
        }
        apriltag_detections_destroy(detections);
        return if_found;
    }

    bool TagStationDetection::DetectImg(const std::string &img_name, TagPose &tag_pose)
    {
        bool if_found = false;
        cv::Mat imge_raw = cv::imread(img_name.c_str());
        if (TAG_IMG_UNDISTORT)
        {
            cv::Mat UndistortImage;
            remap(imge_raw, UndistortImage, map1_, map2_, cv::INTER_LINEAR);
            // cv::imwrite("UndistortImage.png", UndistortImage);
            frame_ = UndistortImage;
        }
        else
            frame_ = imge_raw;
        cvtColor(frame_, gray_, cv::COLOR_BGR2GRAY);
        image_u8_t im = {.width = gray_.cols,
                         .height = gray_.rows,
                         .stride = gray_.cols,
                         .buf = gray_.data};
        zarray_t *detections = apriltag_detector_detect(td_, &im);
        for (int i = 0; i < zarray_size(detections); i++)
        {
            apriltag_detection_t *det;
            zarray_get(detections, i, &det);
            if (det->id == 0)
            {
                if_found = true;
                info_.det = det;
                apriltag_pose_t pose;
                double err = estimate_tag_pose(&info_, &pose);
                CalculatePose(pose, tag_pose);
                if (APRILTAG_DETECTED_IMG_VISUALIZATION)
                    DetectedVisualization(det, 1);
                break;
            }
            else
                continue;
        }
        if (not if_found)
            LWARN(TagStationDetectionTask, "No effective tag detected");
        apriltag_detections_destroy(detections);
        return if_found;
    }

    void TagStationDetection::DetectedVisualization(apriltag_detection_t *det, int d)
    {
        line(frame_, cv::Point(det->p[0][0], det->p[0][1]),
             cv::Point(det->p[1][0], det->p[1][1]),
             cv::Scalar(0, 0xff, 0), 2);
        line(frame_, cv::Point(det->p[0][0], det->p[0][1]),
             cv::Point(det->p[3][0], det->p[3][1]),
             cv::Scalar(0, 0, 0xff), 2);
        line(frame_, cv::Point(det->p[1][0], det->p[1][1]),
             cv::Point(det->p[2][0], det->p[2][1]),
             cv::Scalar(0xff, 0, 0), 2);
        line(frame_, cv::Point(det->p[2][0], det->p[2][1]),
             cv::Point(det->p[3][0], det->p[3][1]),
             cv::Scalar(0xff, 0, 0), 2);
        std::cout << "pixel coord: (" << det->p[0][0] << ", " << det->p[0][1] << "); "
                  << " (" << det->p[1][0] << ", " << det->p[1][1] << ")" << std::endl;
        std::stringstream ss;
        ss << det->id;
        cv::String text = ss.str();
        int fontface = cv::FONT_HERSHEY_SCRIPT_SIMPLEX;
        double fontscale = 1.0;
        int baseline;
        cv::Size textsize = cv::getTextSize(text, fontface, fontscale, 2, &baseline);
        putText(frame_, text, cv::Point(det->c[0] - textsize.width / 2, det->c[1] + textsize.height / 2),
                fontface, fontscale, cv::Scalar(0xff, 0x99, 0), 2);
        imshow("Tag Detections", frame_);
        cv::waitKey(d);
    }

    void TagStationDetection::DetectedVisualization()
    {
        imshow("Tag Detections", frame_);
        cv::waitKey(1);
    }

    void TagStationDetection::CalculatePose(const apriltag_pose_t &pose, TagPose &tag_pose)
    {
        // pose.R->data: a matrix structure holding double-precision values in
        // row-major order (index = row * ncols + col).
        // pose.R is the camera orientation in the tag frame; since rotations are
        // orthonormal, inverse() equals transpose(): Tt_c^-1 = Tc_t.
        Eigen::Matrix3d rot_mat_cam_tag = rot_cam_slope_ * (Eigen::Map<Eigen::Matrix3d>(pose.R->data).inverse());
        Eigen::Vector3d trans_vector = -Eigen::Map<Eigen::Vector3d>(pose.t->data);
        Eigen::Translation3d translation(trans_vector[0], trans_vector[1], trans_vector[2]);
        // Translation3d represents a pure translation (x, y, z); multiplying it with a
        // rotation vector, quaternion, or matrix yields an Affine3d.
        Eigen::Affine3d Tc_t = translation * rot_mat_cam_tag; // tag pose in the camera frame: translation is -R^-1 * t
        Eigen::Affine3d T_target;
        // Camera pose in the tag-station frame: (Tb_tt)^-1 = Ttt_b.
        T_target.matrix() = (Tb_c_.matrix() * Tc_t.matrix() * Tt_tt_.matrix()).inverse();
        Eigen::Vector3d rpy_target = R2ypr(T_target.rotation());
        tag_pose.x = T_target.matrix()(0, 3);
        tag_pose.y = T_target.matrix()(1, 3);
        tag_pose.yaw = rpy_target(0);
        Eigen::Affine3d T_target_car;
        T_target_car.matrix() = T_target.matrix() * T_car_.matrix();
        tag_pose.y_car = T_target_car.matrix()(1, 3);
        std::cout << "pose x, y, z: " << tag_pose.x << ", " << tag_pose.y << ", " << T_target.matrix()(2, 3) << std::endl;
    }

    Eigen::Vector3d TagStationDetection::R2ypr(const Eigen::Matrix3d &R)
    {
        Eigen::Vector3d n = R.col(0);
        Eigen::Vector3d o = R.col(1);
        Eigen::Vector3d a = R.col(2);
        Eigen::Vector3d ypr(3);
        double y = atan2(n(1), n(0));
        double p = atan2(-n(2), n(0) * cos(y) + n(1) * sin(y));
        double r = atan2(a(0) * sin(y) - a(1) * cos(y), -o(0) * sin(y) + o(1) * cos(y));
        ypr(0) = y;
        ypr(1) = p;
        ypr(2) = r;
        return ypr;
    }

    void TagStationPub::init()
    {
        ctx_ = zmq::context_t{1};
        publisher_ = zmq::socket_t(ctx_, zmq::socket_type::pub);
        publisher_.connect("tcp://127.0.0.1:5555");
    }

    void TagStationPub::TagPosePublish(const TagPose &tag_pose)
    {
        pose_2d_.data = tag_pose;
        Pose2dPublish();
        LINFO(TagStationDetectionTask,
              "pub tag pose (x, y, yaw, y_car): " << pose_2d_.data.x << ", " << pose_2d_.data.y
                  << ", " << pose_2d_.data.yaw << ". " << pose_2d_.data.y_car);
    }

    void TagStationPub::TagPosePublish()
    {
        pose_2d_.data.x = 15;
        pose_2d_.data.y = 15;
        pose_2d_.data.yaw = 15;
        pose_2d_.data.y_car = 15;
        Pose2dPublish();
        LWARN(TagStationDetectionTask,
              "no detection, pub: " << pose_2d_.data.x << ", " << pose_2d_.data.y
                  << ", " << pose_2d_.data.yaw << ". " << pose_2d_.data.y_car);
    }

    void TagStationPub::Pose2dPublish()
    {
        publisher_.send(zmq::str_buffer("/charge_station_detection "), zmq::send_flags::sndmore);
        zmq::message_t msg(sizeof(pose_2d_));
        memcpy(msg.data(), pose_2d_.buffer, sizeof(pose_2d_));
        publisher_.send(msg);
    }

    apritag_vision: localization between the charging station and the AprilTag, determining the position of the station relative to the vehicle.

    5. Cross-check: Solving with OpenCV PnP

    Feed the detected tag corners into solvePnP: at small viewing angles the results are acceptable, but at large angles they are worse than the orthogonal iteration method.

    cv::Mat TagStationDetection::tag_pos_pnp(double p[4][2], apriltag_pose_t *pose)
    {
        std::vector<cv::Point3f> point3d;
        float half_x = APRILTAG_DETECTED_SIZE / 2.0f;
        float half_y = APRILTAG_DETECTED_SIZE / 2.0f; // height_target / 2.0; height 30 cm, width 12 cm
        object2d_point.clear();
        cv::Point2f points;
        for (int i = 0; i < 4; i++)
        {
            points.x = p[i][0];
            points.y = p[i][1];
            object2d_point.push_back(points);
        }
        point3d.push_back(cv::Point3f(-half_x, half_y, 0));
        point3d.push_back(cv::Point3f(half_x, half_y, 0));
        point3d.push_back(cv::Point3f(half_x, -half_y, 0));
        point3d.push_back(cv::Point3f(-half_x, -half_y, 0));
        cv::Mat rot = cv::Mat_<double>(3, 3);
        cv::Mat trans;
        cv::Mat Rvec;
        // Mat cam_matrix = (Mat_<double>(3, 3) << 578.6274, 0, 334.7460, 0, 576.9578, 214.0938, 0, 0, 1);
        cv::Mat cam_matrix = (cv::Mat_<double>(3, 3) << 471.33715, 0.0, 437.19435, 0.0, 472.0701, 275.1862, 0.0, 0.0, 1.0);
        // cv::Mat cam_matrix = cv::Mat_<double>(3, 3);
        // Mat distortion_coeff = (Mat_<double>(5, 1) << -0.4290, 0.2780, 0, 0, 0);
        cv::Mat distortion_coeff = (cv::Mat_<double>(5, 1) << -0.305921, 0.060122, -0.000640, 0.001716, 0.000000);
        cv::solvePnP(point3d, object2d_point, cam_matrix, distortion_coeff, rot, trans, false, cv::SOLVEPNP_EPNP); // or SOLVEPNP_P3P (CV_EPNP in the old C API)
        cv::Mat inliers;
        // solvePnPRansac(point3d, object2d_point, cam_matrix, distortion_coeff, rot, trans, false, 50, 2, 0.98, inliers, 0);
        // std::cout << "rot size is:" << rot.size() << rot << std::endl;
        // std::cout << "trans size is:" << trans.size() << std::endl;
        cv::Mat RR;
        double rm[9];
        RR = cv::Mat(3, 3, CV_64FC1, rm);
        cv::Rodrigues(rot, RR); // rotation vector -> rotation matrix
        double tx = trans.at<double>(0, 0);
        double ty = trans.at<double>(1, 0);
        double tz = trans.at<double>(2, 0);
        double dis = sqrt(tx * tx + ty * ty + tz * tz);
        pose->R = matd_create(3, 3);
        for (int i = 0; i < 3; i++)
        {
            for (int j = 0; j < 3; j++)
            {
                MALL(pose->R, i, j) = RR.at<double>(i, j); // copy into the apriltag matrix type
            }
        }
        pose->t = matd_create(3, 1);
        for (int i = 0; i < 3; i++)
        {
            MALL(pose->t, i, 0) = trans.at<double>(i, 0);
        }
        return trans;
    }

    6. PnP Verification

    Implementation: detect a blue object of size 25.6 x 25.6 cm.

    #include <opencv2/core.hpp>
    #include <opencv2/imgcodecs.hpp>
    #include <opencv2/highgui.hpp>
    #include <opencv2/imgproc/imgproc.hpp>
    #include <opencv2/calib3d/calib3d.hpp>
    #include <iostream>

    using namespace cv;
    using namespace std;

    void drawRotatedRect(cv::Mat &img, const cv::RotatedRect &rect, const cv::Scalar &color, int thickness)
    {
        cv::Point2f Vertex[4];
        rect.points(Vertex);
        for (int i = 0; i < 4; i++)
        {
            cv::line(img, Vertex[i], Vertex[(i + 1) % 4], color, thickness);
        }
    }

    // Order the four vertices of a rotated rect as
    // left-up, right-up, right-down, left-down.
    void getTarget2dPoints(cv::RotatedRect object_rect, std::vector<Point2f> &object2d_point)
    {
        cv::Point2f vertices[4];
        object_rect.points(vertices);
        cv::Point2f lu, ld, ru, rd;
        std::sort(vertices, vertices + 4, [](const cv::Point2f &p1, const cv::Point2f &p2)
                  { return p1.x < p2.x; });
        if (vertices[0].y < vertices[1].y)
        {
            lu = vertices[0];
            ld = vertices[1];
        }
        else
        {
            lu = vertices[1];
            ld = vertices[0];
        }
        if (vertices[2].y < vertices[3].y)
        {
            ru = vertices[2];
            rd = vertices[3];
        }
        else
        {
            ru = vertices[3];
            rd = vertices[2];
        }
        object2d_point.clear();
        object2d_point.push_back(lu);
        object2d_point.push_back(ru);
        object2d_point.push_back(rd);
        object2d_point.push_back(ld);
    }

    // Alternative HSV ranges for other colors, kept for reference:
    /* red:
    int iLowH = 156; int iLowS = 43;  int iLowV = 46;
    int iHighH = 180; int iHighS = 255; int iHighV = 255; */
    /* black:
    int iLowH = 0;   int iLowS = 0;   int iLowV = 0;
    int iHighH = 180; int iHighS = 255; int iHighV = 46; */
    /* orange:
    int iLowH = 11;  int iLowS = 43;  int iLowV = 46;
    int iHighH = 25; int iHighS = 255; int iHighV = 255; */
    /* white:
    int iLowH = 0;   int iLowS = 0;   int iLowV = 221;
    int iHighH = 180; int iHighS = 30; int iHighV = 255; */
    // Active range: blue.
    int iLowH = 100;
    int iLowS = 43;
    int iLowV = 46;
    int iHighH = 124;
    int iHighS = 255;
    int iHighV = 255;
    /* gray:
    int iLowH = 0;   int iLowS = 0;   int iLowV = 46;
    int iHighH = 180; int iHighS = 43; int iHighV = 220; */

    int main()
    {
        cv::VideoCapture cap(2);
        Mat gray;
        Mat binImg;
        Mat hsv_img;
        Mat imgThresholded;
        cv::Mat frame;
        if (!cap.isOpened())
        {
            return 1;
        }
        while (true)
        {
            cap >> frame;
            cvtColor(frame, gray, COLOR_BGR2GRAY);
            threshold(gray, binImg, 40, 255, THRESH_BINARY_INV);
            vector<vector<cv::Point>> bin_contours;
            cvtColor(frame, hsv_img, COLOR_BGR2HSV);
            inRange(hsv_img, Scalar(iLowH, iLowS, iLowV), Scalar(iHighH, iHighS, iHighV), imgThresholded); // threshold the image
            // blur(imgThresholded, imgThresholded, Size(5, 5));
            // Opening (removes small noise); if the binary image is still noisy, enlarge the kernel size below.
            Mat element = getStructuringElement(MORPH_RECT, Size(5, 5));
            morphologyEx(imgThresholded, imgThresholded, MORPH_OPEN, element);
            // Opening: erosion then dilation, removes small bright spots.
            // Closing: dilation then erosion, removes small dark holes.
            // Closing (connects nearby components):
            morphologyEx(imgThresholded, imgThresholded, MORPH_CLOSE, element);
            erode(imgThresholded, imgThresholded, element); // erosion
            binImg = imgThresholded;
            vector<Vec4i> hierarchy;
            cv::findContours(binImg, bin_contours, hierarchy, RETR_EXTERNAL, CHAIN_APPROX_SIMPLE, Point());
            Mat imageContours = Mat::zeros(binImg.size(), CV_8UC1);
            Mat Contours = Mat::zeros(binImg.size(), CV_8UC1); // drawing canvases
            for (uint i = 0; i < bin_contours.size(); i++)
            {
                for (size_t j = 0; j < bin_contours[i].size(); j++)
                {
                    // plot every pixel stored in the contour
                    Point P = Point(bin_contours[i][j].x, bin_contours[i][j].y);
                    Contours.at<uchar>(P) = 255;
                }
                drawContours(imageContours, bin_contours, i, Scalar(255), 1, 8, hierarchy);
                cv::RotatedRect RRect = minAreaRect(bin_contours[i]);
                std::cout << "rect_size:" << RRect.size.area() << std::endl;
                /* if (RRect.size.area() < 200 || RRect.size.area() > 100000)
                    continue;
                if (max(RRect.size.width, RRect.size.height) / min(RRect.size.width, RRect.size.height) > 1.3)
                    continue; */
                if (RRect.size.area() < 2000)
                    continue;
                drawRotatedRect(frame, RRect, cv::Scalar(255, 0, 0), 2); // draw rectangles above the area threshold
                Mat rot_vector, translation_vector;
                vector<Point2f> object2d_point;
                getTarget2dPoints(RRect, object2d_point);
                std::vector<cv::Point3f> point3d;
                float half_x = 25.6f / 2.0f;
                float half_y = 25.6f / 2.0f; // half the physical target size (cm)
                point3d.push_back(Point3f(-half_x, half_y, 0));
                point3d.push_back(Point3f(half_x, half_y, 0));
                point3d.push_back(Point3f(half_x, -half_y, 0));
                point3d.push_back(Point3f(-half_x, -half_y, 0));
                cv::Mat rot;
                cv::Mat trans;
                // Mat cam_matrix = (Mat_<double>(3, 3) << 578.6274, 0, 334.7460, 0, 576.9578, 214.0938, 0, 0, 1);
                Mat cam_matrix = (Mat_<double>(3, 3) << 471.33715, 0.0, 437.19435, 0.0, 472.0701, 275.1862, 0.0, 0.0, 1.0);
                // Mat distortion_coeff = (Mat_<double>(5, 1) << -0.4290, 0.2780, 0, 0, 0);
                Mat distortion_coeff = (Mat_<double>(5, 1) << -0.305921, 0.060122, -0.000640, 0.001716, 0.000000);
                cv::solvePnP(point3d, object2d_point, cam_matrix, distortion_coeff, rot, trans);
                std::cout << "rot size is :" << rot.size() << std::endl;
                // solvePnP(objectPoints, imagePoints, cameraMatrix, distCoeffs, rvec, tvec, useExtrinsicGuess = false, flags = SOLVEPNP_ITERATIVE)
                // objectPoints - control points in the world frame; vector<Point3f> works here
                // imagePoints  - corresponding control points in the image frame; vector<Point2f> works here
                // cameraMatrix - camera intrinsics; distCoeffs - distortion coefficients
                double tx = trans.at<double>(0, 0);
                double ty = trans.at<double>(1, 0);
                double tz = trans.at<double>(2, 0);
                double dis = sqrt(tx * tx + ty * ty + tz * tz);
                cout << "dis:" << dis << " tx:" << tx << " ty:" << ty << " tz:" << tz << endl;
                cv::putText(frame, to_string(dis) + "cm", Point(50, 50), 1, 2, Scalar(0, 255, 100), 2);
                cv::putText(frame, to_string(tz) + "cm", Point(300, 50), 1, 2, Scalar(0, 255, 100), 2);
                cv::putText(frame, to_string(tx) + "cm", Point(50, 100), 1, 2, Scalar(0, 255, 100), 2);
            }
            imshow("1", frame);
            imshow("3", hsv_img);
            imshow("2", imgThresholded);
            imshow("4", imageContours);
            if (waitKey(10) == 'q')
                break;
        }
    }

    7. PnP via BA Iteration

    PnP (Perspective-n-Point): a summary and analysis of the various algorithms, Zhihu

    Bundle adjustment (BA): iterative refinement, in the same spirit as the orthogonal iteration above

    The methods above are linear. The PnP problem can also be cast as a nonlinear least-squares problem over the reprojection error. Linear methods treat the positions of the space points as known quantities and solve for the camera pose from a system of equations; nonlinear optimization instead treats the camera pose and the point positions both as optimization variables and optimizes them jointly, which is where BA's other name, bundle optimization, comes from (the author's own reading). Minimizing the reprojection error over poses and 3-D points together makes this a holistic optimization method.

    #include <Eigen/Core>
    #include <Eigen/Dense>
    #include <sophus/se3.hpp>
    #include <opencv2/core.hpp>
    #include <cmath>
    #include <iomanip>
    #include <iostream>

    using namespace std;
    using namespace cv;

    // Aligned vectors of Eigen types (as defined in the original project):
    typedef vector<Eigen::Vector2d, Eigen::aligned_allocator<Eigen::Vector2d>> VecVector2d;
    typedef vector<Eigen::Vector3d, Eigen::aligned_allocator<Eigen::Vector3d>> VecVector3d;

    void bundleAdjustmentGaussNewton(
        const VecVector3d &points_3d,
        const VecVector2d &points_2d,
        const Mat &K,
        Sophus::SE3d &pose)
    {
        // Inputs: space points, their pixel coordinates, the intrinsics, and the
        // initial pose (the identity in this program).
        typedef Eigen::Matrix<double, 6, 1> Vector6d;
        const int iterations = 10;
        double cost = 0, lastCost = 0;
        double fx = K.at<double>(0, 0);
        double fy = K.at<double>(1, 1);
        double cx = K.at<double>(0, 2);
        double cy = K.at<double>(1, 2);
        for (int iter = 0; iter < iterations; iter++) // iterate
        {
            Eigen::Matrix<double, 6, 6> H = Eigen::Matrix<double, 6, 6>::Zero();
            Vector6d b = Vector6d::Zero();
            cost = 0;
            // compute cost
            for (size_t i = 0; i < points_3d.size(); i++)
            {
                Eigen::Vector3d pc = pose * points_3d[i]; // the point in the camera frame
                // Vector6d se3 = pose.log();
                // cout << "se3 = " << se3.transpose() << endl; // starts at 0, stabilizes after 3-4 iterations
                double inv_z = 1.0 / pc[2];
                double inv_z2 = inv_z * inv_z;
                // Projected pixel coordinates of the point:
                Eigen::Vector2d proj(fx * pc[0] / pc[2] + cx, fy * pc[1] / pc[2] + cy);
                Eigen::Vector2d e = points_2d[i] - proj; // residual: observation minus prediction
                cost += e.squaredNorm();                 // squared 2-norm of the error
                // 2x6 Jacobian: derivative of the error w.r.t. the pose; it tells
                // the optimizer in which direction to move to reduce the error.
                Eigen::Matrix<double, 2, 6> J;
                J << -fx * inv_z,
                    0,
                    fx * pc[0] * inv_z2,
                    fx * pc[0] * pc[1] * inv_z2,
                    -fx - fx * pc[0] * pc[0] * inv_z2,
                    fx * pc[1] * inv_z,
                    0,
                    -fy * inv_z,
                    fy * pc[1] * inv_z2,
                    fy + fy * pc[1] * pc[1] * inv_z2,
                    -fy * pc[0] * pc[1] * inv_z2,
                    -fy * pc[0] * inv_z;
                H += J.transpose() * J; // Gauss-Newton normal equations H * dx = b
                b += -J.transpose() * e;
            }
            Vector6d dx;
            dx = H.ldlt().solve(b); // LDLT decomposition of H, solve for dx
            cout << "dx:" << endl << dx << endl;
            cout << "dx.norm():" << endl << dx.norm() << endl;
            if (isnan(dx[0]))
            {
                cout << "result is nan!" << endl;
                break;
            }
            if (iter > 0 && cost >= lastCost)
            {
                // cost increased, the update is not good (divergence)
                cout << "cost: " << cost << ", last cost: " << lastCost << endl;
                break;
            }
            // update the estimate
            pose = Sophus::SE3d::exp(dx) * pose;
            lastCost = cost;
            cout << "iteration " << iter << " cost=" << setprecision(12) << cost << endl;
            if (dx.norm() < 1e-6)
            {
                break; // converged once dx < 1e-6
            }
        }
        cout << "pose by g-n: \n" << pose.matrix() << endl;
    }

    Each algorithm in one sentence (a comparison sketch follows the list):

    • DLT: from the world coordinates and normalized camera coordinates of n points, use the last row to eliminate depth, obtaining 2n constraint equations, and solve the overdetermined system with SVD to estimate the pose matrix.
    • P3P: from the world and normalized camera coordinates of 3 points, use the law of cosines to recover the 3 points' camera-frame coordinates, turning the problem into 3D-3D pose estimation solved by ICP; one extra point pair is needed for disambiguation.
    • EPnP: choose 4 control points from the n world points and compute their weighting coefficients; from the camera model and the n pixel coordinates solve for the 4 control points in the camera frame, hence the n points in the camera frame, again reducing to 3D-3D estimation solved by ICP.
    • BA: build a nonlinear optimization problem from the reprojection errors of the correspondences; use the Lie algebra to obtain the derivative of the error with respect to the pose to guide the optimization, and iterate until the pose minimizing the total reprojection error is found.
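    For a quick comparison, a minimal sketch that runs several OpenCV solvePnP back-ends on the same four coplanar points (camera parameters reused from above; the corner pixel values are illustrative):

    #include <opencv2/calib3d.hpp>
    #include <opencv2/core.hpp>
    #include <iostream>
    #include <vector>

    int main()
    {
        // Four coplanar tag corners (meters) and their detected pixels.
        std::vector<cv::Point3f> obj = {{-0.05f, 0.05f, 0}, {0.05f, 0.05f, 0},
                                        {0.05f, -0.05f, 0}, {-0.05f, -0.05f, 0}};
        std::vector<cv::Point2f> img = {{400, 250}, {480, 250}, {480, 330}, {400, 330}};
        cv::Mat K = (cv::Mat_<double>(3, 3) << 471.33715, 0, 437.19435,
                                               0, 472.0701, 275.1862, 0, 0, 1);
        cv::Mat D = (cv::Mat_<double>(5, 1) << -0.305921, 0.060122, -0.000640, 0.001716, 0);
        // SOLVEPNP_P3P requires exactly 4 points (the 4th disambiguates).
        for (int flag : {cv::SOLVEPNP_ITERATIVE, cv::SOLVEPNP_EPNP, cv::SOLVEPNP_P3P})
        {
            cv::Mat rvec, tvec;
            cv::solvePnP(obj, img, K, D, rvec, tvec, false, flag);
            std::cout << "flag " << flag << " t = " << cv::Mat(tvec.t()) << std::endl;
        }
        return 0;
    }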
    Original article: https://blog.csdn.net/JanKin_BY/article/details/127900981