Article

V-RBNN Based Small Drone Detection in Augmented Datasets for 3D LADAR System

Byeong Hak Kim, Danish Khan, Ciril Bohak, Wonju Choi, Hyun Jeong Lee and Min Young Kim
1 School of Electronics Engineering, Kyungpook National University, Daegu 41566, Korea
2 Hanwha Systems Corporation, Optronics Team, Gumi 39376, Korea
3 Faculty of Computer and Information Science, University of Ljubljana, SI-1000 Ljubljana, Slovenia
4 Agency for Defense Development, Yuseong, Daejeon 34186, Korea
5 Research Center for Neurosurgical Robotic System, Kyungpook National University, Daegu 41566, Korea
* Author to whom correspondence should be addressed.
Sensors 2018, 18(11), 3825; https://doi.org/10.3390/s18113825
Submission received: 10 October 2018 / Revised: 3 November 2018 / Accepted: 5 November 2018 / Published: 8 November 2018
(This article belongs to the Special Issue Laser Sensors for Displacement, Distance and Position)

Abstract

A common countermeasure for detecting threatening drones is the electro-optical/infrared (EO/IR) system. However, its performance is drastically reduced under complex backgrounds, saturation, and light reflection. 3D laser sensors such as LiDAR can overcome the problems of 2D sensors like EO/IR, but they are not sufficient to detect small drones at very long distances because of low laser energy and resolution. To solve this problem, a 3D LADAR sensor is under development. In this work, we study a detection methodology adequate for this LADAR sensor, which can detect small drones at up to 2 km. First, a data augmentation method is proposed that generates a virtual target considering the laser beam and scanning characteristics and fuses it with actual LADAR sensor data, enabling various kinds of tests before the full hardware system is developed. Second, a detection algorithm is proposed that detects drones using voxel-based background subtraction and a variable radially bounded nearest neighbor (V-RBNN) method. The results show that the 0.2 m L2 distance and 60% expected average overlap (EAO) indexes satisfy the required specification for detecting 0.3 m small drones.

1. Introduction

The increasing use of compact drones in the military, domestic, and commercial sectors has raised many privacy and security concerns, so the development of countermeasures for potential drone threats is of great significance. Detecting a drone in the airspace is the first step of defense against it [1,2,3]. Electro-optical/infrared (EO/IR) systems based on 2D images are efficient for detecting drones in both day and night time, but they cannot differentiate between the background and the target cluster when images contain a complex background [4]. Additionally, such systems suffer from thermal image saturation, in which the target sometimes overlaps with the saturated region, limiting detection efficiency [5]. LiDAR is a 3D sensor commonly used for outdoor target detection. In contrast to cameras, it provides accurate range information with a large field of view. LiDAR is widely applied in autonomous vehicle systems and used as a countermeasure to avoid collisions with pedestrians or other vehicles on the road [6,7,8,9]. LiDAR sensors are by far the most used sensors for simultaneous localization and mapping (SLAM), which enables robots to navigate safely in unknown or GPS-restricted environments and assists them in performing complex tasks [10]. However, the working range of most LiDAR sensors is about 100 m. Considering drone interception and neutralization, this distance is too short. A more practical approach is to detect the potential threat at about 1 to 2 km, which allows enough time for the corresponding system to intercept the drone at a safe distance. To overcome this problem, laser-based radar (LADAR) systems that detect vehicles hundreds of meters away have already been developed [11,12,13]. We aim to develop a new LADAR system with high power, high response, and high resolution to effectively detect approaching drones at a distance of 2 km. The manufacturing of optical components such as the laser source and optics takes a long time, so a long period is required to develop such a system. It is very challenging to design a robust and accurate threat detection system when data from the real sensor are not available.
In this study, we present a technique for data augmentation with existing LiDAR and LADAR sensors for the development and testing of the detection framework. A target is generated mimicking the anticipated behavior of an approaching drone in the range of 2 km. The shape, size, and trajectory of the target are simulated considering the optical design of the LADAR system under development [14,15,16]. Taking into account that the LADAR system is not free from optical and sensor noise, possible noises are also included in the data. There are two advantages of designing a drone detection algorithm using augmented data. First, the detection software can be developed in parallel with the hardware. This reduces the overall production time and avoids acquiring data repeatedly from the real sensor as the hardware is modified multiple times during the development process. The augmented data allow convenient experimentation with various scenarios and make optimization much easier. Second, the ground truth of the target is accurate, and it is straightforward to measure the performance of the detection algorithm. For data recorded in a real operating environment, the developer must either visually check the trajectory of the target to create the reference data or mount a high-precision RTK GPS sensor on the target and synchronize it exactly with the recorded data. This process introduces errors and degrades the reliability of the reference data.
Figure 1 shows the overall concept of generating the augmented datasets for various scenarios and designing the detection algorithm using these datasets. In the LADAR data augmentation step, the raw data are acquired by an existing LADAR sensor. The LADAR data are then cropped to the field of view (FOV) of the LADAR sensor under development. On the cropped data, variational and general blinking noises are added to the background to simulate the effects of clouds and moisture. Time-of-flight (TOF) sensors such as LADAR exhibit different point detection characteristics according to beam pattern and divergence angle. Therefore, the number of 3D points and the shape of the target are generated based on the distance of the target by analyzing the laser beam characteristics. Next, a trajectory is designed keeping in view the movement of the target, clouds, and other moving objects. Finally, the augmented dataset is generated by fusing the designed target shape and trajectory profiles. A visualization tool is used to visualize the augmented data in different colors, and it is verified that the dataset exhibits a similar nature to the actual situation. The augmented datasets are classified according to different scenarios, and a bounding box is added at the absolute target location to provide the ground truth data. In the target detection step, the detection algorithm can be developed using the data designed in the LADAR data augmentation process. The initial map of the location is acquired beforehand. The moving objects and the static scene in the augmented data are separated using an octree-based comparison between the initial map and the consecutive frames [17]. Next, the candidate targets are classified by the radially bounded nearest neighbor (RBNN) clustering method [18]. As the characteristics of the target vary with distance, a new variant of RBNN is proposed in this work, which uses variable radius values instead of a single predefined radius to cluster the data. We call this method variable radially bounded nearest neighbor (V-RBNN) clustering. The clustering results can suffer from noise and interference from objects close to the target. To tackle this issue, the clusters are further processed with an outlier removal technique based on a minimum-points-in-radius constraint using nearest neighbor search. During the experimentation, we observed that sometimes the target is not detected, or the detected bounding box is extraordinarily large, due to outliers in the augmented datasets with higher noise. To overcome such situations, sequential target detection is monitored, and a queue of detection results is maintained to predict failures. Finally, in the quantitative measurement section, the L2 distance of the center coordinates and the intersection over union (IOU) of the two bounding boxes are compared with the ground truth and the conventional RBNN method to measure the performance of the target detection algorithm. The performance of the overall algorithm is compared using the average Euclidean (L2) distance and expected average overlap (EAO) values for the different scenarios (datasets).

2. Related Works

2.1. EO/IR Imaging System for Drone Detection

EO/IR-based systems are widely used for the detection of potential threats [19,20]. Compared to a normal imaging sensor, EO/IR can detect the target even in the darkness of night. Figure 2 shows EO/IR images of a drone approaching along a transverse sinusoidal path at a distance of about 1000 m. There are mountains and roads in the distant background along with a river in the middle of the images. There are trees on the river banks at near range, and a drone is flying in front of the trees. As shown in Figure 2a,c, the drone target is detected well where the background is clear. However, it can be seen in Figure 2b that detection fails when the drone approaches the region containing the trees in the background. These conditions frequently occur in the EO/IR operating environment due to the presence of trees, mountains, and clouds in the scene, or because of reflections from the sea surface, the sun glint phenomenon, or the flame of a firing cannon.

2.2. Basic Experiment Using 3D LiDAR

3D sensors can be used to overcome the limitations of EO/IR sensors in drone detection [21]. Figure 3 shows the detection results for a small drone of approximately 30 cm in size using a LiDAR sensor (VLP-16) at different distances. The angular resolution of the LiDAR used in the experiment is 0.1° in the horizontal direction and 2.0° in the vertical direction. The detection resolution for the drone, calculated from the tangent function using Equations (1) and (2), is 0.017 m (1.7 cm) in the horizontal direction and 0.3492 m (34.9 cm) in the vertical direction at a distance of 10 m.
AZ_RES = tan(AZ_angle × π/180) × Range    (1)
EL_RES = tan(EL_angle × π/180) × Range    (2)
AZ_points = Target_size / AZ_RES,  EL_points = Target_size / EL_RES    (3)
where AZ is azimuth, EL is elevation, AZ_RES is the AZ resolution, and EL_RES is the EL resolution.
The predicted scanning resolution for a 30 cm drone at 10 m, calculated using Equation (3), is 17 points (maximum) in the horizontal direction and 1 point in the vertical direction. However, Figure 3a shows that at a distance of 10 m from the LiDAR sensor, the horizontal resolution of the drone is 9 points and the vertical resolution is 1 point. Similarly, it can be seen in Figure 3b that the horizontal resolution is 6 points and the vertical resolution is 1 point at a distance of 25 m. Figure 3c shows the test result at 50 m. According to the formulas, the laser scanning resolutions AZ_RES and EL_RES are 0.0873 m and 1.746 m, respectively, and the small target should be represented by 3.4 × 0.17 points, but the actually detected points are 3 × 1. The resolution decreases as the drone moves farther away from the LiDAR. At 2 km, the AZ_RES and EL_RES of the LiDAR are 3.49 m and 69.84 m, and the detectable points are 0.086 × 0.0043. Hence, at most one point is returned, blinking intermittently with low probability. Considering the blinking noise reflected by moisture in the air, the system cannot distinguish between noise and the target drone at this distance. Thus, the LiDAR sensor has a very low resolution and is not suitable for detecting drones at long distance.
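The calculation above can be reproduced with a short script. The following Python sketch (illustrative only, not the authors' implementation) evaluates Equations (1)–(3) for the VLP-16 parameters quoted in the text (0.1° AZ, 2.0° EL, 0.3 m target):

```python
import numpy as np

def scan_resolution(angle_deg, range_m):
    # Linear footprint of one angular step at the given range, Equations (1)/(2)
    return np.tan(np.deg2rad(angle_deg)) * range_m

def target_points(target_size_m, az_deg, el_deg, range_m):
    # Number of horizontal / vertical samples falling on the target, Equation (3)
    az_res = scan_resolution(az_deg, range_m)
    el_res = scan_resolution(el_deg, range_m)
    return target_size_m / az_res, target_size_m / el_res

for r in (10, 25, 50, 2000):
    az_pts, el_pts = target_points(0.3, 0.1, 2.0, r)
    print(f"range {r:4d} m: AZ points {az_pts:6.3f}, EL points {el_pts:6.4f}")
# At 10 m this gives about 17.2 x 0.86 points, and at 2 km about 0.086 x 0.0043,
# consistent with the values quoted in the text.
```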

2.3. Proposed 3D Scanning System (LADAR)

To tackle the aforementioned issues of 2D EO/IR imaging sensors and 3D LiDAR sensors in drone detection, we are developing a LADAR system as shown in Figure 4a. The sensor is able to detect high-resolution points within a 0.5° × 0.5° window by scanning high-resolution laser pulses at high speed in the AZ and EL directions. Figure 4b shows the experimental environment for acquiring the initial map using a prototype model, and Figure 4c shows the result of the initial map acquisition; the green guideline denotes the real-time scan view of the LADAR. Once the initial map has been generated, the LADAR refers to the radar signal and points to the approximate position, and the laser then starts scanning to detect the target. A wide scanning range can be covered by rotating the scanner assembly using a two-axis gimbal equipped with servo motors. A 1560 nm laser source with a 1 ns pulse duration is used to achieve high resolution. For high-speed scanning, a 1 kHz fast galvanometer mirror is used. The pixel size of the light receiving detector unit is 100 μm, and an optical system is added to the laser detector unit so that the arrival beam diameter is designed for a footprint of 100 μm or less. Considering the beam divergence and galvanometer scanning characteristics, the scannable laser beam array is 150 × 20. Therefore, the angular resolution within the 0.5° × 0.5° window is calculated as 0.025° × 0.003° in the local scan FOV space. Compared to the conventional LiDAR sensor, this performance is 80 times better in the EL direction and 33 times better in the AZ direction. The LADAR scans the local optical scanning space with a servo motor drive to cover 350° in the AZ direction and 120° in the EL direction. The maximum detectable distance is 2 km, which is 16 times longer than that of the conventional LiDAR sensor. To detect a target at a distance of 2 km, a laser output of 700 kW pulse peak power and a seed-light-based optical fiber amplifying laser with a fast repetition rate of 1000 kHz are used.
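As a quick arithmetic check of these figures, the sketch below (assuming uniform beam spacing, not the authors' code) divides the 0.5° window by the 150 × 20 beam array; the AZ improvement factor evaluates to about 30×, and to the quoted 33× when the AZ resolution is rounded to 0.003°:

```python
# LADAR local scan window and beam array (values quoted above)
fov_deg = 0.5
az_beams, el_beams = 150, 20

ladar_az_res = fov_deg / az_beams      # ~0.0033 deg
ladar_el_res = fov_deg / el_beams      # 0.025 deg

lidar_az_res, lidar_el_res = 0.1, 2.0  # VLP-16 resolutions from Section 2.2

print(f"LADAR local resolution: {ladar_az_res:.4f} deg (AZ) x {ladar_el_res:.3f} deg (EL)")
print(f"Improvement over LiDAR: ~{lidar_az_res / ladar_az_res:.0f}x (AZ), "
      f"{lidar_el_res / ladar_el_res:.0f}x (EL)")
```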

2.4. Detection Speed of LADAR

The maximum speed of a detectable drone is limited by the optical scanning speed, the speed of the two-axis gimbal motor, and the range between the sensor and the drone. The optical scanning speed of the LADAR is up to 20 Hz, and the mechanical motor speed is up to 16°/s. The maximum speed of a detectable drone is therefore about 56 m/s (202 km/h) at 200 m; if the distance is 100 m, the maximum detectable drone speed is 28 m/s (101 km/h). The optical scanning speed should be fast enough that, even in the case of fast direction switching and zigzag movement, the trajectory is captured with at least two center coordinates of the drone. At a distance of 200 m, a drone flying dynamically at 10 m/s (36 km/h) is detected with 3 center coordinates under the 20 Hz optical scan condition, and with 17 center coordinates at a range of 1000 m.
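These figures can be sanity-checked with the following sketch, under two assumptions of ours (not stated implementation details): the speed limit is set by the 16°/s gimbal rate (tangential speed = angular rate × range), and the number of center coordinates equals the number of 20 Hz scans completed while the drone crosses the 0.5° local FOV.

```python
import numpy as np

gimbal_rate_deg_s = 16.0   # mechanical motor speed
scan_rate_hz = 20.0        # optical scanning speed
fov_deg = 0.5              # local scan window

def max_trackable_speed(range_m):
    # Tangential speed corresponding to the gimbal angular rate at this range
    return np.deg2rad(gimbal_rate_deg_s) * range_m

def center_coordinates(range_m, drone_speed_m_s):
    # Scans completed while the drone crosses the local FOV width
    fov_width_m = 2.0 * np.tan(np.deg2rad(fov_deg / 2.0)) * range_m
    return int(fov_width_m / drone_speed_m_s * scan_rate_hz)

print(max_trackable_speed(200.0), max_trackable_speed(100.0))            # ~55.9, ~27.9 m/s
print(center_coordinates(200.0, 10.0), center_coordinates(1000.0, 10.0)) # 3, 17
```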

3. Generation of Targets and Noises

Designing a high-performance LADAR is a long-term project, and design changes are inevitable during the implementation phase. Therefore, it is almost impossible to conduct experiments to acquire laser data frequently during hardware development. Also, if the implementation of the detection algorithm only starts after hardware production is completed, the entire development period becomes very long. It is also difficult to obtain various reference datasets by performing target detection experiments using the actual LADAR. In this section, we analyze the beam characteristics of the laser sensor using previously acquired reference map data and describe the shape of the target. The augmented data generated in this procedure are used in the process of developing the target detection algorithm.

3.1. Laser Beam Analysis

Accurate analysis of the laser beam is a critical step in modeling the augmented data for the LADAR sensor. The number of 3D points on the target can be predicted by calculating the interval and number of laser pulses detected in the FOV, considering the aforementioned beam divergence angle and the scanning resolution of the LADAR.
Figure 5a shows the shape of a small drone of about 30 cm in size as the detected target and its dense region. Figure 5b illustrates the intersection of the laser beams in the vertical and horizontal directions; the laser beam interval and the angle of the crossing region can be seen in the same figure. The intersection angle in the vertical direction (EL direction) is 0.003°, and the intersection angle in the horizontal direction (AZ direction) is 0.011°. When the FOV in the AZ and EL directions is 0.5°, the length (m) of the FOV changes with distance and can be obtained simply by Equation (4). If the scan resolution is set to SR_AZ = 150 in the AZ direction and SR_EL = 20 in the EL direction, the number of lines occupied by the target can be obtained using Equation (5), where T_m is the length (m) of the target and ROUNDUP is a function that rounds up to an integer value. If the beam divergence angle is designed to be B_deg = 0.62 mrad (0.035°), the length information (BI_AZ, BI_EL) of the beam intersection area can be calculated from SR_AZ and SR_EL using Equation (6). The length of the detection area extended by the beam intersection area can be calculated as in Equation (7), and the number of 3D points on the final target can be calculated by Equation (8).
FOV_m = 2 × tan(FOV × π/180 × 0.5) × R_m    (4)
BL_AZ = ROUNDUP(T_m / (FOV_m / SR_AZ)),  BL_EL = ROUNDUP(T_m / (FOV_m / SR_EL))    (5)
BI_AZ = ((B_deg × SR_AZ) − FOV_deg) / (SR_AZ − 1),  BI_EL = ((B_deg × SR_EL) − FOV_deg) / (SR_EL − 1)    (6)
TBI_AZ = (T_m + BI_AZ,m) × 2,  TBI_EL = (T_m + BI_EL,m) × 2    (7)
TP = ROUNDUP(TBI_AZ × (SR_AZ / FOV_m) × TBI_EL × (SR_EL / FOV_m))    (8)
Table 1 shows the results of the aforementioned calculations with respect to distance. At 200 m, the target is predicted to be detected on 20 × 3 lines and represented by 105 3D points when the additional points of the beam crossing region are included. At 2 km, the target is detected on 2 × 1 lines, and the result reflecting the beam crossing area is represented by 17 points. Thus, the LADAR can represent the target with 17 AZ and 1 EL points at a distance of 2 km, compared to the LiDAR sensor, which yields only 0.086 AZ and 0.0043 EL points. Figure 5c shows the result of projecting the point characteristics of the final detected target.
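A minimal sketch of Equations (5) and (8) as reconstructed above is given below. FOV_m and the beam-broadened extents TBI_AZ and TBI_EL are taken directly from the 200 m and 2000 m rows of Table 1 rather than recomputed from Equations (4), (6) and (7), and the two factors of Equation (8) are rounded up separately; this separate rounding is an assumption on our part, but it reproduces the tabulated line and point counts:

```python
import math

SR_AZ, SR_EL = 150, 20   # scan lines inside the local FOV
T_M = 0.3                # target size in meters

def occupied_lines(t_m, fov_m):
    # Equation (5): number of AZ / EL scan lines covering the target
    return (math.ceil(t_m / (fov_m / SR_AZ)), math.ceil(t_m / (fov_m / SR_EL)))

def target_points(tbi_az, tbi_el, fov_m):
    # Equation (8), with the AZ and EL factors rounded up separately (assumption)
    return math.ceil(tbi_az * SR_AZ / fov_m) * math.ceil(tbi_el * SR_EL / fov_m)

for range_m, fov_m, tbi_az, tbi_el in [(200, 2.27, 0.519, 0.322),
                                       (2000, 22.69, 2.492, 0.522)]:
    print(range_m, occupied_lines(T_M, fov_m), target_points(tbi_az, tbi_el, fov_m))
# -> 200 (20, 3) 105  and  2000 (2, 1) 17, matching the corresponding rows of Table 1
```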

3.2. Shape of Target and Noise

To generate the target, the number of 3D points contained in the target is first calculated, then the target shape is created according to this number of 3D points, and finally the generated shape of the target is verified. Figure 6a shows the result of designing the shape of the target at a middle distance of about 500 m, along with other background (disturbance) elements. Figure 6b shows the result at a distance of 1 km or more. Since the shape of the target observed in the LiDAR experiments is randomly distributed, random-distribution characteristics are added in consideration of the size of the target and the number of points at each distance. Therefore, a different profile is created for each dataset.
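A hedged sketch of this target generation step is shown below; the line structure, point count, box size, and jitter scale are illustrative assumptions chosen to mimic the long-range case of Table 1 (2 AZ × 1 EL lines, 17 points at 2 km), not values taken from the actual generator:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_target(center_xyz, n_points=17, az_lines=2, el_lines=1,
                size_m=0.3, jitter_m=0.02):
    # Place points on the predicted AZ/EL line grid inside a size_m bounding box,
    # then add a small random jitter to imitate the spread observed with the LiDAR.
    az = rng.choice(np.linspace(-size_m / 2, size_m / 2, az_lines), n_points)
    el = rng.choice(np.linspace(-size_m / 2, size_m / 2, el_lines), n_points)
    depth = rng.uniform(-size_m / 2, size_m / 2, n_points)
    pts = np.column_stack([az, depth, el]) + rng.normal(0.0, jitter_m, (n_points, 3))
    return pts + np.asarray(center_xyz)

print(make_target([0.0, 2000.0, 10.0]).shape)   # (17, 3)
```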

3.3. Trajectory Design

The datasets for the experiments are designed assuming different trajectories of the target. Different datasets are created, first by keeping the same target shape while varying the trajectories, and then by varying the shape of the target while keeping the same trajectory. The blue line in Figure 7a is the trajectory plan for the target movement, and the red color shows the generated motion characteristics of different fake targets. The low-frequency trajectory of a fake target simulates a slowly moving object such as a cloud, and the red line moving in a linear pattern simulates noise distributed near the target. Figure 7b shows the shape of the target corresponding to one frame: the blue dots represented by two lines correspond to the target, and the red dots denote various types of noise. Figure 7c shows a dataset in which the trajectory profile is fused with the target information and the background information; the part represented by the jet color map is the background, and the part shown in blue is the data with the target and added noise.
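The sketch below illustrates the kind of trajectory profiles described here: a transverse sinusoidal path for the drone, a low-frequency drift for a cloud-like fake target, and a linear track for noise near the target. All amplitudes, frequencies, and the frame rate are illustrative assumptions:

```python
import numpy as np

n_frames = 400
t = np.arange(n_frames) / 20.0                    # 20 Hz scan frames

range_m  = np.linspace(2000.0, 200.0, n_frames)   # drone approaching the sensor
drone_az = 5.0 * np.sin(2.0 * np.pi * 0.2 * t)    # winding (sinusoidal) lateral motion, m
cloud_az = 2.0 * np.sin(2.0 * np.pi * 0.02 * t)   # slowly drifting cloud-like fake target
noise_az = -10.0 + 0.05 * np.arange(n_frames)     # linear track of noise near the target

drone_xyz = np.column_stack([drone_az, range_m, np.full(n_frames, 10.0)])
cloud_xyz = np.column_stack([cloud_az, np.full(n_frames, 1500.0), np.full(n_frames, 40.0)])
noise_xyz = np.column_stack([noise_az, range_m + 5.0, np.full(n_frames, 10.0)])
print(drone_xyz.shape, cloud_xyz.shape, noise_xyz.shape)   # one center per frame each
```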

4. Augmentation and Visualization

The success of the detection algorithm depends on the augmented datasets generated using the background data. It is necessary to verify that the generated datasets meet the desired requirements: the shape and movement of the target must be appropriate, and the interference of the noise must be similar to the imitated situations. To examine the datasets, we designed a web-based visualization tool to visualize and verify the augmented data, classified into four datasets according to the speed of the target and the amount of noise in the data. Figure 8 shows the visualization results and the description of each dataset. Dataset #1 contains the profile of a fast-moving drone target that moves along a winding path, with sparse random 3D points to simulate the interference of small clouds and sensor noise. The speed of the target is slow in dataset #2, and dense 3D points are added close to the target to create the effect of birds and other flying objects around the target. Dataset #3 is generated to test the performance of the detection algorithm for fast-moving targets with dense or coarse noise that interferes with the target. In dataset #4, the interference of large planes, thick clouds, and high buildings is imitated together with a slow-moving target. After assessment in the visualization tool, the generated datasets can be used for algorithm development.

5. Target Detection

The overall process of designing a small drone target detection algorithm using augmented data is shown in Figure 1. Algorithm 1 describes the target detection algorithm, which works as follows. First, in the background subtraction stage, we use Background_pointXYZ and Current_pointXYZ to compare the initial map with the input data and segment out the static objects common to both; the output of this operation is cloud_Octree (further described in Section 5.1). In the V-RBNN step, we estimate the distance to the target based on the detection history and store it as calRange_t. The radius is obtained by dividing λ by the product of hscan, the number of horizontal scan lines, and vscan, the number of vertical scan lines, and multiplying by the length of the LADAR FOV, obtained with the tan() function, with respect to calRange_t. The constant parameters λ and bias can be adjusted depending on the size of the detection target. After clustering, the outliers are removed using a minimum number of neighbor points, nMin, within a predefined radius; in the experimental results, nMin is set to 3. To overcome the interference that occurs when large clusters other than the target enter the sensing area after outlier removal, we check the size of the target based on the diagonal length of the detected bounding box. In the case of an abnormally large diagonal, diagBB > 1 m, the corresponding bounding box is excluded. Finally, on the rare occasions when no target points are found or abnormal detection results are obtained, an exceptional situation is declared, and the current target location is predicted from the history of previous target detections based on a finite impulse response (FIR) filtering method [22].
Algorithm 1: The V-RBNN-based target detection.
1:  // Background subtraction
2:  PointXYZ::cloud_Base = Background_pointXYZ
3:  PointXYZ::cloud_Cur = Current_pointXYZ
4:  cloud_Octree ← OctreeChangeDetector(PointXYZ)
5:  // V-RBNN
6:  calRange_t ← TargetRangeCal(history_que_cenBB_xyz[i])
7:  λ = 7.0, bias = 0.01    // tunable constant parameters
8:  radius = λ / (hscan × vscan) × tan(FOV × (π/180)) × calRange_t + bias
9:  cloud_cluster ← SetRBNNRadius(cloud_Octree, radius)
10: // Outlier and occlusion removal
11: Outlier_remove ← setMinNeighborsInRadius(cloud_cluster, nMin)
12: if (diagBB > 1)
13:     Occlusion_remove = Outlier_remove − maxDiagBB
14: else
15:     cloud_cluster_target = Outlier_remove
16: // Sequential position estimation
17: if (calRange_t < 1 || (calRange_t − calRange_{t−1}) > 1000)
18:     Final_target_BB ← estimateBB(history_que_cenBB_xyz[i])
19: else
20:     Final_target_BB ← drawBB(cloud_cluster_target)
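The sequential position estimation step (lines 16–20) can be approximated as follows; this is a simple constant-velocity, FIR-style extrapolation over a short history window (window length assumed by us), not the extended least-square unbiased FIR filter of [22] used in the actual system:

```python
import numpy as np

def estimate_bb_center(history_xyz, window=5):
    # Average the last per-frame displacements and extrapolate one frame ahead
    h = np.asarray(history_xyz[-window:], dtype=float)
    if len(h) < 2:
        return h[-1]
    velocity = np.mean(np.diff(h, axis=0), axis=0)
    return h[-1] + velocity

history = [[0.0, 2000.0, 10.0], [0.1, 1990.0, 10.0], [0.2, 1980.0, 10.0]]
print(estimate_bb_center(history))   # ~ [0.3, 1970.0, 10.0]
```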

5.1. 3D Background Subtraction

When the initial 3D map of an environment has been generated, spatial changes can be identified by comparing it with newly arriving frames in real time [9,23]. We used the octree-based occupancy grid representation method [24] to distinguish between the constant part of the scene and the moving target. An octree is a tree data structure in which each node has eight child nodes. Octrees are widely adopted to partition 3D space because they are computationally efficient for such data [25,26,27]. The voxel-based octree structures of the two point clouds (the initial or reference map and the incoming frames) are compared, and the resulting point cloud contains the moving target together with a lot of clutter. This point cloud is then processed to differentiate between the target and the noise.
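A voxel-hash approximation of this change detection is sketched below: points of the current frame that fall into voxels unoccupied by the initial map are kept as candidate moving points. The voxel size is an assumed parameter, and the actual system uses the octree-based occupancy comparison of [17,24] rather than this simplified Python version:

```python
import numpy as np

def voxel_keys(points, voxel_size):
    # Integer voxel indices used as hash keys
    return set(map(tuple, np.floor(points / voxel_size).astype(np.int64)))

def background_subtraction(background_xyz, current_xyz, voxel_size=0.5):
    occupied = voxel_keys(background_xyz, voxel_size)
    keys = np.floor(current_xyz / voxel_size).astype(np.int64)
    mask = np.array([tuple(k) not in occupied for k in keys])
    return current_xyz[mask]           # points falling into newly occupied voxels

rng = np.random.default_rng(1)
background = rng.uniform(0.0, 100.0, (10000, 3))
current = np.vstack([background[:5000], [[50.0, 50.0, 50.0]]])   # one added point
print(background_subtraction(background, current).shape)
```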

5.2. Variable Radially Bounded Nearest Neighbor (V-RBNN)

There are several approaches for clustering data in 3D space [28,29,30]. A representative approach is the nearest neighbor search; we employ the radially bounded nearest neighbor (RBNN) graph method [18]. This is a modified form of the nearest neighbor graph in which every node is connected to all neighbors present within a predefined radius. RBNN is fast and can cluster data in real time because a nearest neighbor query is not required for every node, eliminating the need for rearranging the graphs. However, the original RBNN cannot be applied to clustering for long-range sensors. Due to the varying shape and size of the target acquired by the LADAR sensor, a single fixed radius cannot be defined for all distances. The target has a dense shape and a large number of 3D points when it is close to the sensor; clustering with a small radius in such a case results in no detection (failure) or in noise being detected as a target cluster. Conversely, a large radius causes outliers to be clustered together with the target when it approaches from a long distance. To effectively cluster targets at ranges up to 2 km, the variable radially bounded nearest neighbor (V-RBNN) method is proposed, which considers how the distribution of the target points varies with the approaching distance. Figure 9a shows, as a reference, the distribution of the target points at near, medium, and long distances, together with the noise. Figure 9b shows that when the search radius is set to 0.06 m, clustering fails for the medium- and long-range targets. If the radius is set to 0.2 m as shown in Figure 9c, the detection performance also deteriorates. We adjust the radius adaptively according to the distance of the target, as shown in Figure 9d, so that clustering performance is maintained even when the target point distribution varies. The green dots are the target points to be judged, and the red dots are noise (clutter). An orange circle indicates optimal clustering of the target, and a gray circle denotes a clustering failure or non-target noise detected as a target.
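The following sketch implements RBNN as a radius-bounded flood fill over a k-d tree and computes the variable radius with the law from Algorithm 1 (λ = 7.0 and bias = 0.01 as quoted there); the synthetic test points and the remaining parameters are illustrative assumptions, not the authors' implementation:

```python
import numpy as np
from scipy.spatial import cKDTree

def vrbnn_radius(range_m, hscan=150, vscan=20, fov_deg=0.5, lam=7.0, bias=0.01):
    # Radius law of Algorithm 1, line 8
    return lam / (hscan * vscan) * np.tan(np.radians(fov_deg)) * range_m + bias

def rbnn_cluster(points, radius):
    # Connect every point to all neighbours within `radius`; connected components
    # (grown here by a flood fill over a k-d tree) are the RBNN clusters.
    tree = cKDTree(points)
    labels = -np.ones(len(points), dtype=int)
    current = 0
    for i in range(len(points)):
        if labels[i] != -1:
            continue
        stack, labels[i] = [i], current
        while stack:
            j = stack.pop()
            for k in tree.query_ball_point(points[j], radius):
                if labels[k] == -1:
                    labels[k] = current
                    stack.append(k)
        current += 1
    return labels

rng = np.random.default_rng(2)
pts = np.vstack([rng.normal([0.0, 500.0, 10.0], 0.005, (17, 3)),   # target-like cluster
                 rng.normal([3.0, 500.0, 10.0], 0.005, (8, 3))])   # nearby noise cluster
print(rbnn_cluster(pts, vrbnn_radius(500.0)))   # two well-separated clusters expected
```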

5.3. GUI Software and Experiments

For the implementation and testing of the target detection algorithm, graphical user interface (GUI) software was designed as shown in Figure 10. The GUI reads the augmented data, displayed in the large window on the upper left, where the green lines express the FOV of the LADAR and the motion of the reference target is marked with a red bounding box. Below the large window are the data loading button and parameter input boxes for adjusting the camera view. The small window on the upper right shows the area of the FOV detected by the LADAR in real time and the movement of the reference target as a red bounding box. The small window on the bottom right shows the final target detection area as a yellow bounding box. We verified the bounding box detection results in the two right-hand windows (top: ground truth, bottom: detection result) in the final implemented GUI experimental environment. Figure 11a is experimental dataset #1 identified in Figure 8a, and Figure 11d is experimental dataset #3 identified in Figure 8c. Figure 11b,e show the baseline target, the existing RBNN method, and the bounding box detection result of the proposed method for each dataset. Figure 11c,f show the L2 distances between the target center coordinates for the proposed method. In Figure 11b,e, the existing RBNN method clusters fake targets together with the real target when they are close and detects the bounding box abnormally; the L2 distance of the detected target center coordinates therefore becomes large. The proposed method shows that this problem can be solved.

6. Quantitative Measurement

It is important to quantitatively evaluate the performance of the target detection algorithm. The GUI software used in the experiments was designed to save log files of the target detection results. Two indexes are used for the quantitative measurements. The first is the L2 distance, the error between the position of the reference target and the center coordinates of the detected target. Figure 12a,e,i,m show the L2 distance of the target detection results using the conventional RBNN method, and Figure 12b,f,j,n show the results for the proposed method. Compared with the conventional RBNN method, the error is significantly lower and more stable. The second index is the IOU, which shows how much the detected bounding box differs from the reference area. The L2 distance is good when the value is small, whereas the IOU is good when the value is close to 1. Figure 12c,g,k,o show the IOU measurement results for target detection using the conventional RBNN method, and Figure 12d,h,l,p show the results for the proposed method. Figure 12 shows that the proposed algorithm is superior in performance, although the IOU measurements fluctuate because the values increase and decrease from frame to frame. In this case, the average performance of the algorithm can be evaluated by measuring the expected average overlap (EAO). Figure 13a shows the average L2 distance, and Figure 13b shows that the accuracy of V-RBNN is 0.6 to 0.7, which is almost 2× better than the conventional RBNN method.
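The two indexes can be computed as sketched below; here the IOU is taken between axis-aligned 3D boxes, and the EAO is simplified to the mean IOU over all frames of a sequence, which is an assumption about how the reported values are aggregated:

```python
import numpy as np

def l2_distance(center_a, center_b):
    return float(np.linalg.norm(np.asarray(center_a) - np.asarray(center_b)))

def iou_3d(box_a, box_b):
    # Boxes given as (min_xyz, max_xyz) corners of axis-aligned bounding boxes
    a_min, a_max = map(np.asarray, box_a)
    b_min, b_max = map(np.asarray, box_b)
    edges = np.clip(np.minimum(a_max, b_max) - np.maximum(a_min, b_min), 0.0, None)
    inter = float(np.prod(edges))
    union = float(np.prod(a_max - a_min) + np.prod(b_max - b_min)) - inter
    return inter / union if union > 0.0 else 0.0

def expected_average_overlap(per_frame_ious):
    # Simplified EAO: mean overlap over all frames of the sequence
    return float(np.mean(per_frame_ious))

gt  = ([0.00, 0.0, 0.0], [0.30, 0.3, 0.3])
det = ([0.05, 0.0, 0.0], [0.35, 0.3, 0.3])
print(l2_distance([0.15, 0.15, 0.15], [0.20, 0.15, 0.15]), iou_3d(gt, det))
```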

7. Conclusions

The first contribution of this paper is a data augmentation method for the design and testing of a drone detection algorithm without real sensor data. The proposed method can be employed to simulate the output of any LADAR sensor by changing the sensor characteristics: the desired shape and size of the drones can be achieved, and the resolution of the 3D points can be adjusted accordingly. A diverse set of data can be generated for the experimentation and design of the detection framework. The implementation of a long-range detection algorithm using a LADAR sensor model and augmented datasets for small drones is the second contribution of this paper. The given method can detect drones at a range of almost 2 km. A new clustering method (V-RBNN) is investigated, as the conventional clustering method (RBNN) is not effective at classifying the target due to the variation in drone shape and size at different distances. The results demonstrate that V-RBNN substantially improves the accuracy of the clustering. We expect that this study will contribute to the development of detection algorithms for 3D sensor systems in various applications. It can also be widely used as a technique to optimize software quickly before hardware development is completed.
The distinction between birds and drones is a challenging task in small drone detection systems. This issue is in our research pipeline; in the future, we will acquire datasets including birds and drones after the development of the actual LADAR system, and we will consider probability-based intelligent classification methods using analysis of the flight behavior of birds and drones.

Author Contributions

Conceptualization, B.H.K. and M.Y.K.; Methodology, B.H.K., D.K. and W.C.; Software, B.H.K., D.K. and W.C.; Validation, M.Y.K. and C.B.; Formal Analysis, B.H.K., D.K. and C.B.; Investigation, M.Y.K.; Resources, B.H.K., H.J.L. and M.Y.K.; Data Curation, B.H.K., M.Y.K. and C.B.; Writing Original Draft Preparation, B.H.K.; Writing Review & Editing, D.K., C.B. and M.Y.K.; Visualization, B.H.K. and C.B.; Supervision, M.Y.K.; Project Administration, M.Y.K.; Funding Acquisition, B.H.K., W.C., H.J.L. and M.Y.K.

Funding

This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2016R1D1A3B03930798). This work was supported by a grant-in-aid of HANWHA SYSTEMS (U-17-014). This work was supported by an Institute for Information & communications Technology Promotion (IITP) grant funded by the Korea government (MSIT) (2016-0-00564, Development of Intelligent Interaction Technology Based on Context Awareness and Human Intention Understanding). This study was supported by the BK21 Plus project funded by the Ministry of Education, Korea (21A20131600011). This work was supported by the DGIST R&D Program of the Ministry of Science, ICT and Future Planning (17-ST-01).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Guvenc, I.; Koohifar, F.; Singh, S.; Sichitiu, M.L.; Matolak, D. Detection, tracking, and interdiction for amateur drones. IEEE Commun. Mag. 2018, 56, 75–81. [Google Scholar] [CrossRef]
  2. Solodov, A.; Williams, A.; Al Hanaei, S.; Goddard, B. Analyzing the threat of unmanned aerial vehicles (UAV) to nuclear facilities. Secur. J. 2018, 31, 305–324. [Google Scholar] [CrossRef]
  3. Drozdowicz, J.; Wielgo, M.; Samczynski, P.; Kulpa, K.; Krzonkalla, J.; Mordzonek, M.; Bryl, M.; Jakielaszek, Z. 35 GHz FMCW drone detection system. In Proceedings of the 2016 17th International Radar Symposium (IRS), Krakow, Poland, 10–12 May 2016. [Google Scholar]
  4. Müller, T. Robust drone detection for day/night counter-UAV with static VIS and SWIR cameras. In Proceedings of the Ground/Air Multisensor Interoperability, Integration, and Networking for Persistent ISR VIII. International Society for Optics and Photonics, Anaheim, CA, USA, 10–13 April 2017; Volume 10190, p. 1019018. [Google Scholar]
  5. Cardone, D.; Merla, A. New frontiers for applications of thermal infrared imaging devices: Computational psychopshysiology in the neurosciences. Sensors 2017, 17, 1042. [Google Scholar] [CrossRef] [PubMed]
  6. Wang, H.; Wang, B.; Liu, B.; Meng, X.; Yang, G. Pedestrian recognition and tracking using 3D LiDAR for autonomous vehicle. Robot. Auton. Syst. 2017, 88, 71–78. [Google Scholar] [CrossRef]
  7. Premebida, C.; Ludwig, O.; Nunes, U. LIDAR and vision-based pedestrian detection system. J. Field Robot. 2009, 26, 696–711. [Google Scholar] [CrossRef] [Green Version]
  8. Musleh, B.; García, F.; Otamendi, J.; Armingol, J.M.; De la Escalera, A. Identifying and tracking pedestrians based on sensor fusion and motion stability predictions. Sensors 2010, 10, 8028–8053. [Google Scholar] [CrossRef] [PubMed]
  9. Azim, A.; Aycard, O. Detection, classification and tracking of moving objects in a 3D environment. In Proceedings of the 2012 IEEE Intelligent Vehicles Symposium (IV), Alcala de Henares, Spain, 3–7 June 2012; pp. 802–807. [Google Scholar]
  10. Lenac, K.; Kitanov, A.; Cupec, R.; Petrović, I. Fast planar surface 3D SLAM using LIDAR. Robot. Auton. Syst. 2017, 92, 197–220. [Google Scholar] [CrossRef]
  11. Morris, D.D.; Colonna, B.; Haley, P. Ladar-based mover detection from moving vehicles. Gen. Dyn. Robot. Syst. 2006, arXiv:1709.08515. [Google Scholar]
  12. Bogoslavskyi, I.; Stachniss, C. Fast range image-based segmentation of sparse 3D laser scans for online operation. In Proceedings of the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Daejeon, Korea, 9–14 October 2016; pp. 163–169. [Google Scholar]
  13. Laurenzis, M.; Hengy, S.; Hommes, A.; Kloeppel, F.; Shoykhetbrod, A.; Geibig, T.; Johannes, W.; Naz, P.; Christnacher, F. Multi-sensor field trials for detection and tracking of multiple small unmanned aerial vehicles flying at low altitude. In Proceedings of the Signal Processing, Sensor/Information Fusion, and Target Recognition XXVI. International Society for Optics and Photonics, Anaheim, CA, USA, 11–12 April 2017; Volume 10200, p. 102001. [Google Scholar]
  14. Anjum, N.; Cavallaro, A. Multifeature object trajectory clustering for video analysis. IEEE Trans. Circuits Syst. Video Technol. 2008, 18, 1555–1564. [Google Scholar] [CrossRef]
  15. Navarro-Serment, L.E.; Mertz, C.; Hebert, M. Pedestrian detection and tracking using three-dimensional ladar data. Int. J. Robot. Res. 2010, 29, 1516–1528. [Google Scholar] [CrossRef]
  16. Kim, B.H.; Khan, D.; Bohak, C.; Kim, J.K.; Choi, W.; Lee, H.J.; Kim, M.Y. Ladar data generation fused with virtual targets and visualization for small drone detection system. In Proceedings of the Technologies for Optical Countermeasures XV. International Society for Optics and Photonics, Berlin, Germany, 10–13 September 2018; Volume 10797, p. 107970. [Google Scholar]
  17. Girardeau-Montaut, D.; Roux, M.; Marc, R.; Thibault, G. Change detection on points cloud data acquired with a ground laser scanner. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2005, 36, W19. [Google Scholar]
  18. Klasing, K.; Wollherr, D.; Buss, M. A clustering method for efficient segmentation of 3D laser data. In Proceedings of the 2008 IEEE International Conference on Robotics and Automation, Pasadena, CA, USA, 19–23 May 2008; pp. 4043–4048. [Google Scholar]
  19. Ling, B.; Agarwal, S.; Olivera, S.; Vasilkoski, Z.; Phan, C.; Geyer, C. Real-time buried threat detection and cueing capability in VPEF environment. In Proceedings of the Detection and Sensing of Mines, Explosive Objects, and Obscured Targets XX, International Society for Optics and Photonics, Baltimore, MD, USA, 20–24 April 2015; Volume 9454, p. 94540. [Google Scholar]
  20. Busset, J.; Perrodin, F.; Wellig, P.; Ott, B.; Heutschi, K.; Rühl, T.; Nussbaumer, T. Detection and tracking of drones using advanced acoustic cameras. In Proceedings of the Unmanned/Unattended Sensors and Sensor Networks XI, and Advanced Free-Space Optical Communication Techniques and Applications International Society for Optics and Photonics, Toulouse, France, 23–24 September 2015; Volume 9647, p. 96470. [Google Scholar]
  21. Mirčeta, K.; Bohak, C.; Kim, B.H.; Kim, M.Y.; Marolt, M. Drone segmentation and tracking in grounded sensor scanned LiDAR datasets. In Proceedings of the Zbornik sedemindvajsete mednarodne Elektrotehniške in računalniške konference ERK 2018, Portorož, Slovenija, 17–18 September 2018; pp. 384–387. [Google Scholar]
  22. Pak, J.M.; Kim, P.S.; You, S.H.; Lee, S.S.; Song, M.K. Extended least square unbiased FIR filter for target tracking using the constant velocity motion model. Int. J. Control Autom. Syst. 2017, 15, 947–951. [Google Scholar] [CrossRef]
  23. Shackleton, J.; VanVoorst, B.; Hesch, J. Tracking people with a 360-degree lidar. In Proceedings of the 2010 7th IEEE International Conference on Advanced Video and Signal Based Surveillance, Boston, MA, USA, 29 August–1 September 2010; pp. 420–426. [Google Scholar]
  24. Hornung, A.; Wurm, K.M.; Bennewitz, M.; Stachniss, C.; Burgard, W. OctoMap: An efficient probabilistic 3D mapping framework based on octrees. Auton. Robot. 2013, 34, 189–206. [Google Scholar] [CrossRef]
  25. Fournier, J.; Ricard, B.; Laurendeau, D. Mapping and exploration of complex environments using persistent 3D model. In Proceedings of the Fourth Canadian Conference on Computer and Robot Vision (CRV ’07), Montreal, QC, Canada, 28–30 May 2007; pp. 403–410. [Google Scholar]
  26. Jessup, J.; Givigi, S.N.; Beaulieu, A. Merging of octree based 3d occupancy grid maps. In Proceedings of the 2014 IEEE International Systems Conference Proceedings, Ottawa, ON, Canada, 31 March–3 April 2014; pp. 371–377. [Google Scholar]
  27. Shen, Y.; Lindenbergh, R.; Wang, J. Change analysis in structural laser scanning point clouds: The baseline method. Sensors 2016, 17, 26. [Google Scholar] [CrossRef] [PubMed]
  28. Aijazi, A.K.; Checchin, P.; Trassoudaine, L. Segmentation based classification of 3D urban point clouds: A super-voxel based approach with evaluation. Remote Sens. 2013, 5, 1624–1650. [Google Scholar] [CrossRef]
  29. Dohan, D.; Matejek, B.; Funkhouser, T. Learning hierarchical semantic segmentations of LIDAR data. In Proceedings of the 2015 International Conference on 3D Vision, Washington, DC, USA, 19–22 October 2015; pp. 273–281. [Google Scholar]
  30. Brodu, N.; Lague, D. 3D terrestrial lidar data classification of complex natural scenes using a multi-scale dimensionality criterion: Applications in geomorphology. ISPRS J. Photogramm. Remote Sens. 2012, 68, 121–134. [Google Scholar] [CrossRef] [Green Version]
Figure 1. A flow chart for the small drone detection system.
Figure 2. Drone detection using the EO/IR imaging system under different background conditions. (a,c) Detection success with a normal background; (b) detection failure with a complex background.
Figure 3. Limitation of target detection using 3D LiDAR sensor system. (a) Range = 10 m, 9 points detection, (b) Range = 25 m, 6 points detection, (c) Range = 50 m, 3 points detection.
Figure 4. LADAR concept and the example dataset of an airport. (a) LADAR hardware design and scanning view, (b) experimental environment (airport), (c) an example sensing result.
Figure 5. Characteristics analysis of LADAR laser beam and calculation for equivalent target projection points. (a) Dense area of target, (b) beam divergence and intersection, (c) detectable target projection points.
Figure 6. Shape of targets and noises at different ranges. (a) target shape at middle range, (b) target shape at far range.
Figure 7. Trajectory design for targets and noises. (a) trajectory design, (b) target shape design, (c) generation of augmented experimental dataset.
Figure 8. Visualization results of the augmented datasets and their descriptions.
Figure 9. Basic concept of the variable radially bounded nearest neighbor clustering method. (a) Ground truth, (b) RBNN (R = 0.06 m), (c) RBNN (R = 0.2 m), (d) proposed method.
Figure 10. GUI software environment for small target detection.
Figure 11. Experimental results of small drone detection. (a) Detection result using dataset #1; (b) bounding box results for each frame; (c) Euclidean distances between sequential targets; (d) detection result using dataset #3; (e,f) corresponding bounding box and Euclidean distance results.
Figure 12. Euclidean distance and intersection over union (IOU) measurements. (a–d) Results using dataset #1; (e–h) results using dataset #3; (i–l) results using dataset #2; (m–p) results using dataset #4.
Figure 13. Overall performance measurement. (a) Average L2 distance; (b) expected average overlap (EAO).
Table 1. Calculation for target projection points in different ranges.

Range   FOV      Target Angle  AZ Lines  EL Lines  AZ + BI    EL + BI    Target
(R_m)   (FOV_m)  (T_deg)       (BL_AZ)   (BL_EL)   (TBI_AZ)   (TBI_EL)   (TP)
200     2.27     0.086         20        3         0.519      0.322      105
500     5.67     0.034         8         2         0.848      0.355      46
800     9.08     0.021         5         1         1.177      0.389      20
1100    12.48    0.016         4         1         1.506      0.422      19
1400    15.88    0.012         3         1         1.834      0.455      18
1700    19.29    0.010         3         1         2.163      0.489      17
2000    22.69    0.009         2         1         2.492      0.522      17
