From a9f23893edac37f04fcc50331e840681e224804f Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=D0=98=D0=B2=D0=B0=D0=BD=20=D0=9F=D0=BE=D0=B4=D0=BC=D0=BE?= =?UTF-8?q?=D0=B3=D0=B8=D0=BB=D1=8C=D0=BD=D1=8B=D0=B9?= Date: Wed, 31 Aug 2022 17:35:24 +0300 Subject: [PATCH 1/6] Update 'ReadMe.md' --- ReadMe.md | 181 +++--------------------------------------------------- 1 file changed, 7 insertions(+), 174 deletions(-) diff --git a/ReadMe.md b/ReadMe.md index bb0e40b..04879e5 100644 --- a/ReadMe.md +++ b/ReadMe.md @@ -1,176 +1,9 @@ -# OpenVINS with the additional features for AR tasks. -Please refer to Redmine page: https://redmine.drivecast.tech/issues/375 for now to track the history and understand the feature set. - -# OpenVINS - -[![ROS 1 Workflow](https://github.com/rpng/open_vins/actions/workflows/build_ros1.yml/badge.svg)](https://github.com/rpng/open_vins/actions/workflows/build_ros1.yml) -[![ROS 2 Workflow](https://github.com/rpng/open_vins/actions/workflows/build_ros2.yml/badge.svg)](https://github.com/rpng/open_vins/actions/workflows/build_ros2.yml) -[![ROS Free Workflow](https://github.com/rpng/open_vins/actions/workflows/build.yml/badge.svg)](https://github.com/rpng/open_vins/actions/workflows/build.yml) - -Welcome to the OpenVINS project! -The OpenVINS project houses some core computer vision code along with a state-of-the art filter-based visual-inertial -estimator. The core filter is an [Extended Kalman filter](https://en.wikipedia.org/wiki/Extended_Kalman_filter) which -fuses inertial information with sparse visual feature tracks. These visual feature tracks are fused leveraging -the [Multi-State Constraint Kalman Filter (MSCKF)](https://ieeexplore.ieee.org/document/4209642) sliding window -formulation which allows for 3D features to update the state estimate without directly estimating the feature states in -the filter. 
Inspired by graph-based optimization systems, the included filter has modularity allowing for convenient -covariance management with a proper type-based state system. Please take a look at the feature list below for full -details on what the system supports. - -* Github project page - https://github.com/rpng/open_vins -* Documentation - https://docs.openvins.com/ -* Getting started guide - https://docs.openvins.com/getting-started.html -* Publication reference - http://udel.edu/~pgeneva/downloads/papers/c10.pdf - -## News / Events - -* **March 14, 2022** - Initial dynamic initialization open sourcing, asynchronous subscription to inertial readings and publishing of odometry, support for lower frequency feature tracking. See v2.6 [PR#232](https://github.com/rpng/open_vins/pull/232) for details. -* **December 13, 2021** - New YAML configuration system, ROS2 support, Docker images, robust static initialization based on disparity, internal logging system to reduce verbosity, image transport publishers, dynamic number of features support, and other small fixes. See - v2.5 [PR#209](https://github.com/rpng/open_vins/pull/209) for details. -* **July 19, 2021** - Camera classes, masking support, alignment utility, and other small fixes. See - v2.4 [PR#117](https://github.com/rpng/open_vins/pull/186) for details. -* **December 1, 2020** - Released improved memory management, active feature pointcloud publishing, limiting number of - features in update to bound compute, and other small fixes. See - v2.3 [PR#117](https://github.com/rpng/open_vins/pull/117) for details. -* **November 18, 2020** - Released groundtruth generation utility package, [vicon2gt](https://github.com/rpng/vicon2gt) - to enable creation of groundtruth trajectories in a motion capture room for evaulating VIO methods. -* **July 7, 2020** - Released zero velocity update for vehicle applications and direct initialization when standing - still. 
See [PR#79](https://github.com/rpng/open_vins/pull/79) for details. -* **May 18, 2020** - Released secondary pose graph example - repository [ov_secondary](https://github.com/rpng/ov_secondary) based - on [VINS-Fusion](https://github.com/HKUST-Aerial-Robotics/VINS-Fusion). OpenVINS now publishes marginalized feature - track, feature 3d position, and first camera intrinsics and extrinsics. - See [PR#66](https://github.com/rpng/open_vins/pull/66) for details and discussion. -* **April 3, 2020** - Released [v2.0](https://github.com/rpng/open_vins/releases/tag/v2.0) update to the codebase with - some key refactoring, ros-free building, improved dataset support, and single inverse depth feature representation. - Please check out the [release page](https://github.com/rpng/open_vins/releases/tag/v2.0) for details. -* **January 21, 2020** - Our paper has been accepted for presentation in [ICRA 2020](https://www.icra2020.org/). We look - forward to seeing everybody there! We have also added links to a few videos of the system running on different - datasets. -* **October 23, 2019** - OpenVINS placed first in the [IROS 2019 FPV Drone Racing VIO Competition - ](http://rpg.ifi.uzh.ch/uzh-fpv.html). We will be giving a short presentation at - the [workshop](https://wp.nyu.edu/workshopiros2019mav/) at 12:45pm in Macau on November 8th. -* **October 1, 2019** - We will be presenting at the [Visual-Inertial Navigation: Challenges and Applications - ](http://udel.edu/~ghuang/iros19-vins-workshop/index.html) workshop at [IROS 2019](https://www.iros2019.org/). The - submitted workshop paper can be found at [this](http://udel.edu/~ghuang/iros19-vins-workshop/papers/06.pdf) link. -* **August 21, 2019** - Open sourced [ov_maplab](https://github.com/rpng/ov_maplab) for interfacing OpenVINS with - the [maplab](https://github.com/ethz-asl/maplab) library. -* **August 15, 2019** - Initial release of OpenVINS repository and documentation website! 
- -## Project Features - -* Sliding window visual-inertial MSCKF -* Modular covariance type system -* Comprehensive documentation and derivations -* Extendable visual-inertial simulator - * On manifold SE(3) b-spline - * Arbitrary number of cameras - * Arbitrary sensor rate - * Automatic feature generation -* Five different feature representations - 1. Global XYZ - 2. Global inverse depth - 3. Anchored XYZ - 4. Anchored inverse depth - 5. Anchored MSCKF inverse depth - 6. Anchored single inverse depth -* Calibration of sensor intrinsics and extrinsics - * Camera to IMU transform - * Camera to IMU time offset - * Camera intrinsics -* Environmental SLAM feature - * OpenCV ARUCO tag SLAM features - * Sparse feature SLAM features -* Visual tracking support - * Monocular camera - * Stereo camera - * Binocular (synchronized) cameras - * KLT or descriptor based - * Masked tracking -* Static and dynamic state initialization -* Zero velocity detection and updates -* Out of the box evaluation on EurocMav, TUM-VI, UZH-FPV, KAIST Urban and VIO datasets -* Extensive evaluation suite (ATE, RPE, NEES, RMSE, etc..) - -## Codebase Extensions - -* **[vicon2gt](https://github.com/rpng/vicon2gt)** - This utility was created to generate groundtruth trajectories using - a motion capture system (e.g. Vicon or OptiTrack) for use in evaluating visual-inertial estimation systems. - Specifically we calculate the inertial IMU state (full 15 dof) at camera frequency rate and generate a groundtruth - trajectory similar to those provided by the EurocMav datasets. Performs fusion of inertial and motion capture - information and estimates all unknown spacial-temporal calibrations between the two sensors. - -* **[ov_maplab](https://github.com/rpng/ov_maplab)** - This codebase contains the interface wrapper for exporting - visual-inertial runs from [OpenVINS](https://github.com/rpng/open_vins) into the ViMap structure taken - by [maplab](https://github.com/ethz-asl/maplab). 
The state estimates and raw images are appended to the ViMap as - OpenVINS runs through a dataset. After completion of the dataset, features are re-extract and triangulate with - maplab's feature system. This can be used to merge multi-session maps, or to perform a batch optimization after first - running the data through OpenVINS. Some example have been provided along with a helper script to export trajectories - into the standard groundtruth format. - -* **[ov_secondary](https://github.com/rpng/ov_secondary)** - This is an example secondary thread which provides loop - closure in a loosely coupled manner for [OpenVINS](https://github.com/rpng/open_vins). This is a modification of the - code originally developed by the HKUST aerial robotics group and can be found in - their [VINS-Fusion](https://github.com/HKUST-Aerial-Robotics/VINS-Fusion) repository. Here we stress that this is a - loosely coupled method, thus no information is returned to the estimator to improve the underlying OpenVINS odometry. - This codebase has been modified in a few key areas including: exposing more loop closure parameters, subscribing to - camera intrinsics, simplifying configuration such that only topics need to be supplied, and some tweaks to the loop - closure detection to improve frequency. - - -## Demo Videos - - - - - - - - - - - - - - - - -
- - - - - - - - - - - - - - - - -## Credit / Licensing - -This code was written by the [Robot Perception and Navigation Group (RPNG)](https://sites.udel.edu/robot/) at the -University of Delaware. If you have any issues with the code please open an issue on our github page with relevant -implementation details and references. For researchers that have leveraged or compared to this work, please cite the -following: - -```txt -@Conference{Geneva2020ICRA, - Title = {{OpenVINS}: A Research Platform for Visual-Inertial Estimation}, - Author = {Patrick Geneva and Kevin Eckenhoff and Woosik Lee and Yulin Yang and Guoquan Huang}, - Booktitle = {Proc. of the IEEE International Conference on Robotics and Automation}, - Year = {2020}, - Address = {Paris, France}, - Url = {\url{https://github.com/rpng/open_vins}} -} -``` - -The codebase and documentation is licensed under the [GNU General Public License v3 (GPL-3)](https://www.gnu.org/licenses/gpl-3.0.txt). -You must preserve the copyright and license notices in your derivative work and make available the complete source code with modifications under the same license ([see this](https://choosealicense.com/licenses/gpl-3.0/); this is not legal advice). +# open_vins for AR. +## Installation +### Requirements +1. Ubuntu 20.04 +2. OpenCV 4 with contrib and libgdal included: https://docs.opencv.org/4.x/d7/d9f/tutorial_linux_install.html +2. ROS noetic, install only ROS fromm this guide (do not install open_vins): https://docs.openvins.com/gs-installing.html +3. 
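The requirements added by this patch ask for an OpenCV build that includes the contrib modules and GDAL. A minimal from-source build along the lines of the linked tutorial might look as follows — package names, clone locations, and CMake options here are illustrative assumptions, not taken from the patch:

```shell
# Sketch: build OpenCV with the contrib modules and GDAL support.
# Adjust versions and paths to your environment.
sudo apt-get install -y build-essential cmake git libgdal-dev
git clone https://github.com/opencv/opencv.git
git clone https://github.com/opencv/opencv_contrib.git
mkdir -p opencv/build && cd opencv/build
cmake -D OPENCV_EXTRA_MODULES_PATH=../../opencv_contrib/modules \
      -D WITH_GDAL=ON \
      -D CMAKE_BUILD_TYPE=Release ..
make -j"$(nproc)"
sudo make install
```

The `build` directory created here is the path that a later patch suggests assigning to `OpenCV_DIR` when catkin cannot find OpenCV on its own.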
From 991d5c6ccbc4a141f25ac0c60cf0522637fd62aa Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=D0=98=D0=B2=D0=B0=D0=BD=20=D0=9F=D0=BE=D0=B4=D0=BC=D0=BE?= =?UTF-8?q?=D0=B3=D0=B8=D0=BB=D1=8C=D0=BD=D1=8B=D0=B9?=
Date: Wed, 31 Aug 2022 17:48:08 +0300
Subject: [PATCH 2/6] Update 'ReadMe.md'

---
 ReadMe.md | 17 ++++++++++++++---
 1 file changed, 14 insertions(+), 3 deletions(-)

diff --git a/ReadMe.md b/ReadMe.md
index 04879e5..df36201 100644
--- a/ReadMe.md
+++ b/ReadMe.md
@@ -4,6 +4,17 @@
 ### Requirements
 1. Ubuntu 20.04
-2. OpenCV 4 with contrib and libgdal included: https://docs.opencv.org/4.x/d7/d9f/tutorial_linux_install.html
-2. ROS noetic, install only ROS fromm this guide (do not install open_vins): https://docs.openvins.com/gs-installing.html
-3.
+2. OpenCV 3 with contrib and libgdal included: https://docs.opencv.org/4.x/d7/d9f/tutorial_linux_install.html. If `catkin build` complains that OpenCV is not found, set the variable in the CMakeLists.txt of each package: `set(OpenCV_DIR "path_to_opencv_build_directory")`
+3. ROS Noetic; install only ROS from this guide (do not install open_vins): https://docs.openvins.com/gs-installing.html
+4. KAIST Urban dataset, available on the server at /mnt/disk-small
+
+### Building
+1. `mkdir -p ~/workspace/catkin_ws_ov/src/ && cd ~/workspace/catkin_ws_ov/src`
+2. `git clone https://git.drivecast.tech/pi/openvins_linux.git`
+3. `cd ../`
+4. `catkin build`
+
+### Running
+1. From a terminal in the `~/workspace/catkin_ws_ov/` directory, run `source devel/setup.bash` and then `roslaunch ov_msckf subscribe.launch config:=kaist`

From a190949040f12ab7ef4050388c0ce34664a04780 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=D0=98=D0=B2=D0=B0=D0=BD=20=D0=9F=D0=BE=D0=B4=D0=BC=D0=BE?= =?UTF-8?q?=D0=B3=D0=B8=D0=BB=D1=8C=D0=BD=D1=8B=D0=B9?=
Date: Wed, 31 Aug 2022 22:58:30 +0300
Subject: [PATCH 3/6] Update 'ReadMe.md'

---
 ReadMe.md | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/ReadMe.md b/ReadMe.md
index df36201..ffa8c72 100644
--- a/ReadMe.md
+++ b/ReadMe.md
@@ -16,5 +16,10 @@
 ### Running
 1. From a terminal in the `~/workspace/catkin_ws_ov/` directory, run `source devel/setup.bash` and then `roslaunch ov_msckf subscribe.launch config:=kaist`
+2. From a second terminal in the same directory, run `source devel/setup.bash` and then `roslaunch global_fusion global_fusion.launch`
+3. From a third terminal in the same directory, run `source devel/setup.bash`, start `rviz`, and open the `open_vins/ov_msckf/launch/display.rviz` configuration file.
+4. From a fourth terminal in the same directory, run `source devel/setup.bash`, start `rviz`, and open the `open_vins/global_fusion/launch/display.rviz` configuration file.
+
+There is a problem with publishing the cubes to two different topics: to the VIO and to the GPS-VIO frames. I still have to find a workaround for this problem. For now you can see the AR cubes in the RViz instance that shows the VIO output (the one from the ov_msckf directory)

From af9debd464af26dbacb16a21ae1b28fbcd945767 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=D0=98=D0=B2=D0=B0=D0=BD=20=D0=9F=D0=BE=D0=B4=D0=BC=D0=BE?= =?UTF-8?q?=D0=B3=D0=B8=D0=BB=D1=8C=D0=BD=D1=8B=D0=B9?=
Date: Wed, 31 Aug 2022 23:03:33 +0300
Subject: [PATCH 4/6] Update 'ReadMe.md'

---
 ReadMe.md | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/ReadMe.md b/ReadMe.md
index ffa8c72..9b221f4 100644
--- a/ReadMe.md
+++ b/ReadMe.md
@@ -1,5 +1,16 @@
 # open_vins for AR.
+## Features
+1. Loosely coupled GPS sensor fusion, taken from VINS-Fusion.
+2. 
AR visualizations in ROS RViz.
+3. Ability to save the image sequence of the camera with the rendered AR directly in RViz.
+
+## Redmine
+Please visit the Redmine notes of the following tasks for a more detailed overview and the development history:
+https://redmine.drivecast.tech/issues/376
+https://redmine.drivecast.tech/issues/375
+https://redmine.drivecast.tech/issues/374
+
 ## Installation
 
 ### Requirements

From 163e6e6a168c2444fe68897d27e7f81035ad6e6d Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=D0=98=D0=B2=D0=B0=D0=BD=20=D0=9F=D0=BE=D0=B4=D0=BC=D0=BE?= =?UTF-8?q?=D0=B3=D0=B8=D0=BB=D1=8C=D0=BD=D1=8B=D0=B9?=
Date: Wed, 31 Aug 2022 23:03:53 +0300
Subject: [PATCH 5/6] Update 'ReadMe.md'

---
 ReadMe.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/ReadMe.md b/ReadMe.md
index 9b221f4..72f8152 100644
--- a/ReadMe.md
+++ b/ReadMe.md
@@ -7,9 +7,9 @@
 ## Redmine
 Please visit the Redmine notes of the following tasks for a more detailed overview and the development history:
-https://redmine.drivecast.tech/issues/376
-https://redmine.drivecast.tech/issues/375
-https://redmine.drivecast.tech/issues/374
+https://redmine.drivecast.tech/issues/376  
+https://redmine.drivecast.tech/issues/375  
+https://redmine.drivecast.tech/issues/374  
 
From e492e9ae93a2673fecdad8aebac8854bc6d35e1c Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=D0=98=D0=B2=D0=B0=D0=BD=20=D0=9F=D0=BE=D0=B4=D0=BC=D0=BE?= =?UTF-8?q?=D0=B3=D0=B8=D0=BB=D1=8C=D0=BD=D1=8B=D0=B9?=
Date: Wed, 31 Aug 2022 23:07:56 +0300
Subject: [PATCH 6/6] Update 'ReadMe.md'

---
 ReadMe.md | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/ReadMe.md b/ReadMe.md
index 72f8152..cb4e883 100644
--- a/ReadMe.md
+++ b/ReadMe.md
@@ -33,4 +33,5 @@ https://redmine.drivecast.tech/issues/374
 There is a problem with publishing the cubes to two different topics: to the VIO and to the GPS-VIO frames. I still have to find a workaround for this problem. For now you can see the AR cubes in the RViz instance that shows the VIO output (the one from the ov_msckf directory)
-
+### Saving the rendered AR to an image sequence
+1. Turn on the CameraPub display (TODO: change the Camera Info Topic in CameraPub to /ov_msckf/cam0_info and save the .rviz config file)
\ No newline at end of file
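Pulling the build and run steps from the patches above into one place, the whole procedure can be sketched as a single script. This is a sketch only: it assumes ROS Noetic, catkin_tools, and access to the repository, and the README runs each launcher in its own terminal — they are backgrounded here purely for illustration.

```shell
#!/usr/bin/env bash
set -e
mkdir -p ~/workspace/catkin_ws_ov/src && cd ~/workspace/catkin_ws_ov/src
git clone https://git.drivecast.tech/pi/openvins_linux.git
cd ..
catkin build
source devel/setup.bash
roslaunch ov_msckf subscribe.launch config:=kaist &   # VIO estimator
roslaunch global_fusion global_fusion.launch &        # loosely coupled GPS fusion
# The display.rviz paths are relative to the cloned sources; adjust to your checkout.
rviz -d open_vins/ov_msckf/launch/display.rviz &      # VIO view (AR cubes visible here)
rviz -d open_vins/global_fusion/launch/display.rviz & # GPS-VIO view
wait
```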