Automatic control of model helicopter flight by machine vision

Number of pages: 67 | File format: Word | File code: 31084
Year: 2014 | Degree: Master's | Category: Computer Engineering
  • Summary of Automatic control of model helicopter flight by machine vision

    Computer Engineering Master Thesis (M.Sc)

    Abstract:

    Today, automatic control is widely used in transportation systems. The most important reasons for using it are cost reduction, the ability to make vehicles lighter, improved passenger safety, and keeping people away from danger altogether; the last two benefits are most apparent in systems that can fly. In this thesis, a helicopter flight control system is designed that provides remote control based on image processing. The model helicopter is tracked with a camera connected to a computer using image processing algorithms. After the helicopter is tracked, commands are sent from the computer to the hardware over a serial link at 9600 bits per second. The hardware built for this project includes a hand-made board that digitizes the analog parameters of the radio control, together with an interface for receiving commands from the computer. The helicopter is detected using image processing and color tracking algorithms; its X,Y position in the frame is then determined and displayed on screen. To achieve high-speed tracking, object detection methods together with thresholding and segmentation are used. Detection of the helicopter in the frame was performed with the computer's webcam and the Microsoft Kinect camera. The required programs and algorithms were implemented in C++ in the Visual Studio 2013 environment. The image processing uses the cvInRange color tracking function available in the OpenCV image processing library. The computer detects the flying helicopter through the webcam connected to it, then sends speed and left/right commands to the radio control through the serial port so that the helicopter is kept in the center of the frame.
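
    A minimal sketch of the color-tracking step described in the abstract is shown below, using the OpenCV C++ API (cv::inRange is the C++ counterpart of the cvInRange function mentioned above). The HSV bounds, camera index and display details are illustrative assumptions, not the calibration values used in the thesis.

        // Sketch: track a colored target with cv::inRange and report its X,Y position.
        #include <opencv2/opencv.hpp>

        int main()
        {
            cv::VideoCapture cap(0);                      // webcam connected to the computer
            if (!cap.isOpened()) return -1;

            cv::Mat frame, hsv, mask;
            while (cap.read(frame))
            {
                cv::cvtColor(frame, hsv, cv::COLOR_BGR2HSV);
                // Keep only the pixels whose color lies inside the calibrated range.
                cv::inRange(hsv, cv::Scalar(100, 120, 70), cv::Scalar(130, 255, 255), mask);

                // The center of mass of the surviving pixels gives the helicopter's X,Y position.
                cv::Moments m = cv::moments(mask, true);
                if (m.m00 > 0)
                {
                    cv::Point pos(int(m.m10 / m.m00), int(m.m01 / m.m00));
                    cv::circle(frame, pos, 10, cv::Scalar(0, 0, 255), 2);
                }

                cv::imshow("tracking", frame);
                if (cv::waitKey(30) == 27) break;         // Esc to quit
            }
            return 0;
        }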

    Key words: machine vision, helicopter detection, color recognition, radio control, image processing library

    Chapter 1

      General Research

    Research overview

    The importance of computer vision can be seen in its many applications, such as cancer diagnosis, security systems, and biometrics. Image processing today mostly refers to digital image processing, a branch of computer science that deals with processing digital signals representing images captured by a digital camera or a scanner. Image processing has two main branches: image enhancement and machine vision. Image enhancement covers methods such as filtering and contrast adjustment that improve the visual quality of images and ensure they are displayed correctly in the destination environment (such as a printer or a computer monitor), while machine vision deals with methods for understanding the meaning and content of images so that they can be used in tasks such as robotics. Tracking an object through a sequence of video frames is one such task, and it must be performed with the required accuracy. Although object tracking originated in military applications, today its very wide use in fields such as traffic control and detection of unusual movements has brought this topic and its various aspects special attention. Among the issues that have always made tracking algorithms difficult are their interaction with target recognition methods, the variable appearance of targets, and the simultaneous tracking of multiple targets.

    In this research, a method that is more efficient and more accurate than previous approaches for tracking objects in webcam imagery is introduced. In this system, video frames of the model helicopter's flight are processed and the helicopter is identified and extracted. The proposed algorithm is divided into several stages. First, the static background is detected and its noise removed; this background is then subtracted from each frame to reveal moving objects. Next, an image filtering stage removes shadows and noise from the footage, and finally the moving objects are separated, identified, and tracked using a blob tracking method. The system was tested on footage of a person in a yard and on city roads. The experimental results show that the proposed model for detecting and tracking moving objects works well and improves the estimation of object motion and trajectory in terms of both speed and accuracy. Object tracking is also used in autonomous aerial vehicles (drones). Another approach to tracking aerial vehicles is color recognition, in which an image processing algorithm identifies a target color.
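
    The background-subtraction and blob-tracking pipeline outlined above can be sketched as follows with OpenCV's built-in MOG2 background subtractor (OpenCV 3+ API). This is an illustrative substitute for the thesis's own routine, and the thresholds, kernel size and minimum blob area are assumptions.

        // Sketch: learn the static background, subtract it, clean the mask, and box each moving blob.
        #include <opencv2/opencv.hpp>
        #include <vector>

        int main()
        {
            cv::VideoCapture cap(0);
            if (!cap.isOpened()) return -1;

            // Mixture-of-Gaussians model of the static background; deviating pixels become foreground.
            cv::Ptr<cv::BackgroundSubtractorMOG2> bg = cv::createBackgroundSubtractorMOG2(500, 16.0, true);

            cv::Mat frame, fgMask;
            while (cap.read(frame))
            {
                bg->apply(frame, fgMask);

                // Shadows are marked 127; keep only confident foreground (255), then remove small noise.
                cv::threshold(fgMask, fgMask, 200, 255, cv::THRESH_BINARY);
                cv::morphologyEx(fgMask, fgMask, cv::MORPH_OPEN,
                                 cv::getStructuringElement(cv::MORPH_RECT, cv::Size(3, 3)));

                // Each remaining connected region ("blob") is a moving-object candidate.
                std::vector<std::vector<cv::Point> > contours;
                cv::findContours(fgMask.clone(), contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
                for (size_t i = 0; i < contours.size(); ++i)
                    if (cv::contourArea(contours[i]) > 100.0)          // ignore tiny specks
                        cv::rectangle(frame, cv::boundingRect(contours[i]), cv::Scalar(0, 255, 0), 2);

                cv::imshow("moving objects", frame);
                if (cv::waitKey(30) == 27) break;
            }
            return 0;
        }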

    Definition of the problem and research objectives

    The purpose of this research project is to create a system for controlling an unmanned model helicopter so that the helicopter is kept at the center of the image received from the webcam connected to the computer. The helicopter is tracked using this webcam, and commands are sent to it in the shortest possible time; the commands include speed and direction. A suitable tracking method is therefore needed to find the helicopter. The final result of the project is an automatically controlled helicopter that can also perform some pre-defined movements.
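
    A hypothetical sketch of the centering rule described above is given below: the offset of the tracked position from the frame center is converted into direction and speed corrections. The gains, dead-band and final serial framing are assumptions for illustration; the actual byte protocol sent to the radio-control board at 9600 baud is not reproduced here.

        // Sketch: turn the helicopter's tracked (x, y) position into centering commands.
        #include <cstdlib>
        #include <iostream>

        // One command per processed frame: signed left/right and speed corrections.
        struct Command { int leftRight; int speed; };

        Command centerHelicopter(int x, int y, int frameWidth, int frameHeight)
        {
            const int dx = x - frameWidth / 2;    // > 0: helicopter is right of center
            const int dy = y - frameHeight / 2;   // > 0: helicopter is below center

            Command cmd = { 0, 0 };
            // Simple proportional rule with a dead-band so the craft does not oscillate
            // around the center; the gain (1/4) and dead-band (20 px) are illustrative.
            if (std::abs(dx) >= 20) cmd.leftRight = -dx / 4;
            if (std::abs(dy) >= 20) cmd.speed     = -dy / 4;
            return cmd;
        }

        int main()
        {
            // Example: helicopter detected at (520, 130) in a 640x480 frame.
            Command c = centerHelicopter(520, 130, 640, 480);
            std::cout << "leftRight=" << c.leftRight << " speed=" << c.speed << std::endl;
            // In the real system these corrections would be framed into bytes and written
            // to the serial port driving the radio-control board.
            return 0;
        }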

    Objectives

    The main objectives of this project are:

    Connecting the radio control board of the model helicopter to the computer to digitize the analog radio control.

    Identifying and tracking the helicopter by image processing to determine the position and orientation.  

    Sending automatic commands from the computer to the helicopter and placing the helicopter in the center of the frame.

    Landing the helicopter at a point selected by clicking the mouse on the image frame.

    Organization of the thesis

    Chapter 2 introduces some of the background related to the project and reviews previous work in this field; to provide insight, it discusses methods such as the subtraction of two images and the color identification algorithms used in this project. Chapter 3 explains the tools and design used for the project, the required software such as libraries and algorithms, and the hardware needed to carry out the project. Chapter 4 describes the implementation of the project, gives a technical explanation of the methods used, and presents the system tests and the results obtained, with each test described along with its result. The analysis of the results and the capabilities and limitations of the system are then discussed, and chapter 5 presents the conclusions, summarizing the results and implications of this system along with suggestions for future work. The main focus of this project is on image processing and color tracking of the helicopter.

    Autonomous vehicles

    Controlling a helicopter is more difficult than controlling other vehicles because of its complexity [1]. Helicopters are very useful vehicles, and autonomous helicopters in particular have many advantages and real-world uses. They can be used for safety, including assisting pilots in the event of loss of control or providing airspace surveillance. They can also monitor environmental factors such as volcanoes or weather conditions. Another common use is in dangerous situations, such as entering mines, destroying them, or moving through contaminated spaces [1]. They can likewise be used alongside systems operating on the surface of seas and oceans. Intelligence and control of autonomous vehicles are generally divided into three categories: internal intelligence, external intelligence, and a combination of the two. Internal intelligence means that all tracking and navigation is performed on board the vehicle. The major challenge here is to build an autonomous vehicle that can find and follow its own path along a road within a given amount of time. For example, the vehicle named Stanley carries a GPS receiver, camera, antenna, and attached sensors, and all of its intelligence runs inside the system [20]. As another example, the on-board intelligence of an autonomous model helicopter was developed as part of a doctoral thesis [3].

  • Contents & References of Automatic control of model helicopter flight by machine vision

    List:

    Chapter 1: Research Generalities
    1-1- Research overview
    1-1-1- Definition of the problem and research objectives
    1-2- Objectives
    1-3- Thesis organization

    Chapter 2: Research Background
    2-1- Research background
    2-2- Autonomous vehicles
    2-2-1- Internal intelligence
    2-2-2- External intelligence
    2-2-3- Combination of external and internal intelligence
    2-3- Image processing
    2-4- Object tracking
    2-5- Challenges in object tracking
    2-6- Feature selection
    2-6-1- Color
    2-6-2- Edge
    2-6-3- Texture
    2-6-4- Object recognition
    2-6-5- Background subtraction
    2-6-6- Image segmentation
    2-7- Edge detection and linking
    2-8- Edge-based active contour models
    2-9- Tracking moving objects
    2-10- Conclusion

    Chapter 3: Research Method, Materials and Methods
    3-1- Research method, materials and methods
    3-2- Software tools
    3-2-1- Control theory
    3-2-2- OpenCV computer vision library
    3-3- Thresholding
    3-3-1- Binary thresholding
    3-3-2- Inverse binary thresholding
    3-3-3- Threshold to zero
    3-3-4- Inverted threshold to zero
    3-4- Arduino board
    3-5- Hardware tools
    3-5-1- Helicopter
    3-5-2- Kinect
    3-5-3- Arduino description
    3-5-4- Built board

    Chapter 4: System Implementation
    4-1- System implementation
    4-2- Control through Arduino
    4-3-1- Reverse engineering of infrared signals
    4-3-2- PWM modulation
    4-3-3- Infrared packet structure
    4-4- Kinect depth map accuracy
    4-5- Results obtained
    4-6- Stability in the air
    4-7- Summary

    Chapter 5: Discussion, Conclusions and Suggestions
    5-1- Digital potentiometer control
    5-2- Helicopter control through the computer
    5-2-1- Tracking
    5-2-2- Extraction of the helicopter position
    5-2-3- Edge tracking
    5-2-4- Camera used in the project
    5-2-5- Computer required for the project
    5-2-6- Color recognition method
    5-2-7- Color recognition and color filter
    5-2-8- Image smoothing
    5-3- Hardware communication module with the computer used in the project
    5-4- Project programming environment
    5-5- OpenCV library
    5-6- How the project works
    5-7- Project algorithm
    5-8- Summary
    5-9- Proposals

    List of sources
    Appendices

    Source:

     

     

    [1] Aguilar-Ponce, R. (2007), Automated object detection and tracking based on clustered sensor networks. PhD thesis, University of Louisiana. AAI3294839.

     

    [2] Altug, E., Ostrowski, J. P., and Taylor, C. J., (2003), Quadrotor control using dual camera visual feedback. In International Conference on Robotics & Automation, IEEE, pp. 4294-4299.

     

    [3] Amidi, O., Mesaki, Y., and Kanade, T., (1993), Research on an autonomous vision guided helicopter.

     

    [4] Andersen, M., Jensen, T., Lisouski, P., Mortensen, A., Hansen, M., Gregersen, T., and Ahrendt, P. (2012), Kinect depth sensor evaluation for computer vision applications. Tech. rep., Aarhus, Aarhus University.

     

    [5] Benezeth, Y., Jodoin, P., Emile, B., Laurent, H., and Rosenberger C. (2008), Review and evaluation of commonly-implemented background subtraction algorithms. In Pattern Recognition, pp. 1-4.

     

    [6] Fan, J., Yau, D., Elmagarmid, A., and Aref, W. (2001), Automatic image segmentation by integrating color-edge extraction and seeded region growing. IEEE Transactions on Image Processing, 1454-1466.

     

    [7] Fieguth, P., and Terzopoulos, D. (1997), Color-based tracking of heads and other mobile objects at video frame rates. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition, pp. 21-27.

     

    [8] Heath, M., Sarkar, S., Sanocki, T., and Bowyer, K. (1996), Comparison of edge detectors: A methodology and initial study. In Computer Vision and Image Understanding, IEEE Computer Society Press, pp. 38-54.

     

    [9] Jiang, X., and Bunke, H. (1999), Edge detection in range images based on scan line approximation. Computer Vision and Image Understanding 73, 183-199.

     

    [10] McGillivary, P., Sousa, J., Martins, R., Rajan, K., and Leroy, F. (2012), Integrating autonomous underwater vessels, surface vessels and aircraft as persistent surveillance components of ocean observing studies. In IEEE Autonomous Underwater Vehicles, Southampton, UK.

     

    [11] Moeslund, T. B., and Granum, E. (2001), A survey of computer vision-based human motion capture. Computer Vision and Image Understanding 81(3), 231-268.

     

    [12] Ning, J., Zhang, L., Zhang, D., and Wu, C. (2009), Robust object tracking using joint color texture histogram. IJPRAI, 1245-1263.

     

    [13] Nummiaro, K., Koller-Meier, E., and Gool, L. V. (2003), Color features for tracking nonrigid objects. Special Issue on Visual Surveillance, Chinese Journal of Automation 29, 345-355.

     

     

    [14] OpenCV. (2012), Basic thresholding operations.

     

    [15] Pavlidis, T., and Liow, Y. T. (1990), Integrating region growing and edge detection. IEEE Trans. Pattern Analysis and Machine Intelligence, 225-233.

     

    [16] Schmid, C. (2005), Introduction into system control.

     

    [17] Solomon, C., and Breckon, T. P. (2010), Fundamentals of Digital Image Processing: A Practical Approach with Examples in Matlab. Wiley-Blackwell. ISBN-13: 978-0470844731.

     

    [19] Sonka, M., Hlavac, V., and Boyle, R. (1999), Image Processing, Analysis and Machine Vision, 2 ed. Brooks/Cole.

     

    [20] Thrun, S., Montemerlo, M., Dahlkamp, H., Stavens, D., Aron, A., Diebel, J., Fong, P., Gale, J., Halpenny, M., Hoffmann, G., Lau, K., Oakley, C., Palatucci, M., Pratt, V., and Stang, P. (2006), Stanley: The robot that won the DARPA Grand Challenge. Journal of Field Robotics 23, 661-692.

    [21] Xu, R. Y. D., Allen, J. G., and Jin, J. S. (Darlinghurst, Australia, 2004), Robust real-time tracking of nonrigid objects. In Proceedings of the Pan-Sydney area workshop on Visual information processing, VIP '05, Australian Computer Society, Inc., pp. 95-98.

     

    [22] Yilmaz, A., Javed, O., and Shah, M. (2006), Object tracking: A survey. ACM Computing Surveys (CSUR) 38, 45.

     

    [23] Arduino Home Page, http://www.arduino.
