Sunday, December 20, 2015

Ethical and Moral Issues in UAS


Ethical and Moral Issues in UAS
Shawn M. Wyne
ASCI 638 – Human Factors in Unmanned Systems
Embry-Riddle Aeronautical University
Abstract
           Over the last decade, Unmanned Aerial Systems (UAS) have become a significant part of warfare. They are a tool for accomplishing a number of military objectives, including employing weapons. This paper compares and contrasts the moral and ethical uses of UAS with those of more traditional weapons.
Keywords: UAS, remote warfare, ethics
Morality in UAS
Unmanned aircraft have been in use almost since the beginning of flight. Early versions were even weaponized, made into remotely controlled suicide machines. But for decades their use was limited by the available technology; they were never employed in any measurable number, nor were they particularly effective. The biggest gains in unmanned systems during the Cold War came in satellites. In the last two decades, however, this niche technology has grown into a useful industry, and unmanned aircraft have become the norm for governments and militaries across the globe. As with any new technology, its emergence raises new concerns about whether it should be used.
Whether or not to use a weapon first requires a discussion of what is legally allowed. UAS themselves are not actually a weapon; they are a platform for employing weapons. In their current state, they employ the same weapons that are employed from other aircraft. Since the weapons themselves are allowed, UAS are technically allowed as well. The second point to consider is the targeting employed by UAS. At least for the US military, targeting requirements are identical for manned and unmanned aircraft. The primary rules for legal targeting are the principles of discrimination, proportionality, and military necessity (Johansson, 2011). The laws of armed conflict apply no matter the type of weapon or the platform that employs it. If a target is legal, it can be struck by UAS or manned aircraft alike.
The real concern over the use of UAS is the ease with which force can be applied. The evolution of military technology has always embraced methods to inflict damage on the enemy while limiting risk to oneself. From archers, to rifles, to artillery, to cruise missiles, distance affords the attacker a level of safety. The risk of harm to one's own forces is often a deterrent to warfare, so there is a risk that increasing the ease with which one can wage war will increase its frequency. I think this concern is misplaced, however, for two reasons. First, not all forces abide by this logic in the first place. Dictators may send their forces to battle with no regard for losses, and regimes such as Japan in WWII or North Korea today will fight without question if told to do so. Second, nations considered more reasonable in modern times must also consider the open nature of the global community. Even if going to war becomes easier, there are still consequences that cannot be avoided, even when those consequences are not direct military retaliation. I do not think UAS themselves will lead nations to engage in warfare more easily.
An additional concern is the risk of abuse of the new technology. I think that is a legitimate concern, but not one that requires any special care. All military technologies can be abused by those who wield them. Additionally, the states that may be inclined to use this type of technology against others, or even against their own people, are not particularly considerate of rules anyway. Some states already abuse their power with more traditional tools. I do not believe an otherwise just government will begin to abuse its citizens simply because of this additional technology.
The US Air Force has made a deliberate effort to re-brand UAS by calling them Remotely Piloted Aircraft (RPAs). This is an important distinction: the aircraft, even though no human inhabits it, is still under the deliberate control of an operator. The aircrew are trained to fly UAS in the same ways they are trained to fly manned aircraft, and they follow the same rules and procedures as any other aircraft. But they can do some things better than ever before. A fighter jet striking a target may have only minutes to identify the target and decide to employ weapons. In some cases the pilot never sees the target at all, but relies on someone else passing a coordinate while the fighter blindly releases a GPS-guided weapon. There are procedures that must be followed, but that is one way they operate. UAS have the ability to operate for much longer periods of time. A single UAS might orbit a target area for more than 40 hours and may be relieved by subsequent UAS in turn for days or weeks of constant coverage. Using high-fidelity cameras, the UAS can collect information on a target like never before. Not only can the crew be surer of a target's validity, they can be extremely precise in estimating the collateral damage that might result from an attack. I have personally seen many valid targets go unstruck because a UAS could identify civilians in the area, and I have seen UAS wait for days until a clear shot was available. The level of detail provided by UAS allows a level of restraint not possible with manned aircraft; a fighter may have only minutes to decide whether to strike, and if it does not, the target may be lost. Even though collateral damage may be legally acceptable, the global community now insists that modern warfare limit civilian casualties to almost none. Avoiding “international censure” has become a deliberate goal of governments (Kreps & Kaag, 2012). This task is made more difficult without using UAS.
UAS, like manned aircraft, missiles, artillery, and tanks, are a tool for accomplishing military objectives in warfare. They follow the same laws of armed conflict applicable to all other fighting, and they allow more precision and surety in an environment that historically contains neither. As long as they are used responsibly, UAS are no more of a moral risk than any other long-distance conventional weapon that has come before them. When to use force is still a point of argument and discussion, but the choice of tools should not exclude UAS simply because they might be misused.
References
Johansson, L. (2011). Is it Morally Right to Use Unmanned Aerial Vehicles (UAVs) in War? Philosophy & Technology. doi:10.1007/s13347-011-0033-8
Kreps, S., & Kaag, J. (2012). The Use of Unmanned Aerial Vehicles in Contemporary Conflict: A Legal and Ethical Analysis. Polity, 44(2), 260-285.
 
 

ASCI 638 end of course

For the Embry-Riddle Aeronautical University Master's course ASCI 638, I wrote a case analysis of UAS GCS design. As a tool, the case analysis was very effective in the course, primarily through the research required for the paper. In my job, I work on advanced GCS design and have input to many human factors related design decisions. One of the most interesting things I found in my research was a set of papers written by the same author on this topic. I used my research to start discussions with coworkers, and found that some of them knew the author I was talking about. He had worked indirectly with our project several years ago and was an early impetus for some of the human factors work we currently do. The research also provided a framework for some of the principles we now follow, since I better understand where they came from. My research also uncovered work that has been done in an area I am just starting to work in. I have additional tasks starting next year to incorporate some new technologies into our company's products. Some of these will be difficult, but I was able to find research on the exact topic that will help guide some of our decisions. In particular, I am going to try to use some of this research to convince one of our program managers that some of his expectations are not realistic. He has the right intent, but some of his ideas run counter to important human factors principles that have been researched and proven, specifically in the domain in which we work.

Saturday, December 19, 2015

GCS Design Considerations


GCS DESIGN CONSIDERATIONS
Shawn M. Wyne
ASCI 638 – Human Factors in Unmanned Systems
Embry-Riddle Aeronautical University
Abstract
Positive control and coordination of unmanned aerial systems (UAS) is a unique task set that has not been completely explored from a human factors perspective. Aircraft cockpits have been studied and improved over decades of research, but UAS, as an emerging technology, have not had the same level of rigor applied. Is it sufficient to simply re-create a cockpit on the ground and call it complete? An aircraft cockpit, while optimized for the tasks it fulfills, also has limitations in its scope. It has size and weight restrictions. Its equipment must operate in a high-altitude environment that is cold, at low air pressure, and possibly subject to sustained g-forces. UAS are not encumbered by the same physical limits of a cockpit. It follows that a renewed analysis should focus on Ground Control Station (GCS) design and development in order to optimize the system for its intended purposes. System limitations and use differences need to be fully accounted for in any analysis. UAS are not just another aircraft, and a GCS should not be treated like just another cockpit.
Keywords: UAS, GCS design, GCS systems
Significance of Problem
Design considerations for cockpits are a well-documented area of study. The environment and the information processing and decision-making aspects of a cockpit are well known. But just because a GCS and a cockpit both control an airborne vehicle does not mean they are interchangeable, or even that they should be close to the same. A USAF Scientific Advisory Board report noted that “the considerable base of human factors knowledge derived from cockpit experience may have limited applicability to future systems” (as cited in Tvaryanas, 2006). Although the Board was generalizing about the pace of technological advancement, the applicability of this concept to GCS design is particularly salient.
There are numerous standards a cockpit design must adhere to. Most are related to safety, in that any inefficiency or error in pilot action may cause an accident and loss of life. For UAS, however, there is currently no equivalent set of design standards. To assist decision making on rules for integrating UAS into the national airspace, NASA commissioned a study to analyze FAA regulations and determine how they relate to human factors issues applicable to the GCS. In short, the 100-page result suggests: nearly all of them (Jones, Harron, Hoffa, Lyall, & Wilson, 2012). This direction implies that the obvious model for GCS design standards is the manned aircraft cockpit. But this is an imprecise and inappropriate analogy. Although the rules may say so, there are many reasons the two should not be the same.

For hardware design, the GCS is not limited in the way an aircraft cockpit is limited. The physical environment of a cockpit is harsher, and cockpit displays and controls are mandated to survive in that environment. Land-bound, a GCS does not have the same limitations. Power supply and computing support have their own limits in the air, as an aircraft often sacrifices system support or redundancy for less weight. Cockpits also have limited ability to fully meet ergonomic ideals; things like ejection seats and crash survivability take priority over crew comfort. Without these limitations, GCS physical layout and design can surpass that of a manned cockpit. Some aspects of cockpit design remain appropriate, as they apply to all human-machine systems, like those for visual acuity and information display arrangement (Wickens, Gordon, & Liu, 1998).

But the design principles for cockpits leave a number of informational avenues uncovered. Proprioceptive cues typically available to pilots physically inside an aircraft are notably absent in a GCS. Sensory input of inherent auditory and vestibular information is crucially lacking: there is no rush of wind noise to imply excessive airspeed, engine roar to verify a power setting, vibration to judge turbulence, or g-force to signal a decrease in total energy state. The displays in a cockpit are largely a visual medium for presenting aircraft system information and states, but even this is only partially inclusive of the visual information a pilot receives. Visual environmental cues allow a pilot to gather a significant amount of information important to safe flight simply by looking outside the cockpit. While zero-visibility flight is technically possible for manned aircraft, it is considered the most difficult regime and requires special training and certification to attempt (FAA, 2015). Research has indicated significant benefit in utilizing multiple sensory input methods to improve response time and accuracy (Elliott & Stewart, 2006). Other technologies, such as speech-based input, have limited applicability in a noisy cockpit but may prove useful in a GCS. Even though the sum of information required of a pilot in a manned aircraft may be the same as that in a GCS, the systems and interface of the UAS pilot should not be the same.
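To make the missing-cue argument concrete, the following is a minimal, hypothetical sketch (in Python) of how downlinked telemetry could drive synthetic auditory cues in a GCS, substituting for the wind noise, engine roar, and vibration a pilot would otherwise sense. The telemetry field names, the 250-knot reference speed, and the cue thresholds are invented for illustration and do not describe any fielded system.

```python
from dataclasses import dataclass

@dataclass
class Telemetry:
    airspeed_kts: float    # indicated airspeed reported by the air vehicle
    engine_rpm_pct: float  # commanded engine power, percent of rated RPM
    accel_z_g: float       # vertical acceleration, a rough turbulence proxy

def synthetic_cues(t: Telemetry) -> dict:
    """Map telemetry to volume levels (0.0-1.0) for three synthetic audio channels."""
    return {
        # "wind" grows louder as airspeed approaches an assumed 250 kt reference
        "wind_noise": min(t.airspeed_kts / 250.0, 1.0),
        # engine tone scaled directly from the commanded power setting
        "engine_tone": min(t.engine_rpm_pct / 100.0, 1.0),
        # low-frequency rumble when vertical acceleration departs from 1 g
        "turbulence_rumble": min(abs(t.accel_z_g - 1.0) / 0.5, 1.0),
    }

# Example: moderate cruise with light turbulence
print(synthetic_cues(Telemetry(airspeed_kts=180.0, engine_rpm_pct=85.0, accel_z_g=1.2)))
```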
In an analysis of military UAS accidents, it was noted that up to 69% of all mishaps are attributed to human factors issues (Tvaryanas, Thompson, & Constable, 2006; Waraich, 2013). This is significant for the design of current GCS and is an acknowledgement of an area of design flaws, but how this knowledge should dictate future design is not known. Another field with established standards is that of computer workstations. As a ground-based unit, a GCS typically has more in common with the basic computer workstations that control equipment such as manufacturing facilities and heavy industry. The commercial standard is the American National Standards Institute/Human Factors and Ergonomics Society-100 (ANSI/HFES-100), which applies to workstations and input/output devices. Waraich (2013) compared several current GCS designs to ANSI/HFES-100 standards and found them mostly compliant. Combining the fact that GCS are compliant with computer workstation human factors standards with the fact that the majority of accidents are still human factors related, one can conclude that those standards alone are insufficient. The Department of Defense (DoD) has its own design standards. Mil-Std-1472G dictates specific human engineering aspects of design, which are largely anthropometric (DoD, 2012). Mil-Std-1787C is narrower in focus and describes requirements for aircraft display symbology (DoD, 2001). All of these standards are applicable to UAS GCS design, but none of them address UAS-specific problems. Another influence on design standards should be the framework by which mishaps are evaluated. The DoD utilizes a Human Factors Analysis and Classification System (HFACS) model to determine human factors influences on mishap causation. The model looks at organizational influences, unsafe supervision, preconditions for unsafe acts, and unsafe acts (Waraich, 2013). The latter two categories are those traditionally considered design issues. Although the first two categories are not normally addressed through design standards, their potential causal influence on mishaps is clear. Even more concerning, the design of the UAS as a whole system can force organizational influences and supervision to operate in a manner that is detrimental to overall safety and efficiency. These problems are not addressed by standards that are purely anthropometric. GCS standards need to include the full realm of operation and investigate how the design influences all aspects of operation.
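As an illustration of why purely anthropometric standards cannot cover the whole HFACS space, here is a small, hypothetical Python sketch that tallies mishap causal factors by HFACS tier and flags which tiers conventional design standards even attempt to address. The mishap entries are invented placeholders, not data from any real investigation.

```python
from collections import Counter

HFACS_TIERS = [
    "organizational influences",
    "unsafe supervision",
    "preconditions for unsafe acts",
    "unsafe acts",
]

# Tiers that conventional, largely anthropometric design standards touch at all
DESIGN_ADDRESSED = {"preconditions for unsafe acts", "unsafe acts"}

# (mishap id, causal factor tier) -- placeholder records for illustration only
mishap_factors = [
    ("M1", "unsafe acts"),
    ("M1", "preconditions for unsafe acts"),
    ("M2", "unsafe supervision"),
    ("M3", "organizational influences"),
    ("M3", "unsafe acts"),
]

counts = Counter(tier for _, tier in mishap_factors)
for tier in HFACS_TIERS:
    covered = "covered" if tier in DESIGN_ADDRESSED else "not covered"
    print(f"{tier}: {counts.get(tier, 0)} factor(s), {covered} by current design standards")
```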
Another area of UAS control that is entirely unique is the quality of data received by UAS crews. Being physically separated from the aircraft, a GCS relies on a datalink to receive video and information. This link creates two problems. The first is that bandwidth can be a limiting factor. The pilot's view is already restricted to what onboard cameras can see, and the video itself may be degraded or simply poor; resolution, color, and field of view are all impacted. Poor or missing information leads to low awareness and mistakes. Synthetic, computer-generated graphics can be developed to improve operator efficiency (Calhoun, Draper, Abernathy, Delgado, & Patzek, 2005). Although such a system creates new problems relating to clutter and accuracy, those issues are already addressed by existing display standards. Synthetic vision can be a tool to make up for lost visual cues when the pilot is remotely controlling the aircraft. The second issue unique to GCS design is the effect of temporal distortion due to latency in datalinks. Simple control inputs can have delayed feedback, which restricts the operator's ability to assess the accuracy and effects of those inputs. Research has shown degraded performance when control is impaired by low temporal update rates or transmission delays (McCarley & Wickens, 2004). This is another area where display design standards are inadequate for the GCS; normal design principles state only that feedback lag should be avoided when possible (Wickens et al., 1998).
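One way a display might mitigate the latency problem is to dead-reckon the last received state forward by the measured link delay before drawing it. The short Python sketch below assumes a constant-velocity model and invented parameter names; it is illustrative only and is not drawn from any actual GCS implementation.

```python
import math

def predict_position(lat_deg: float, lon_deg: float,
                     groundspeed_kts: float, track_deg: float,
                     latency_s: float) -> tuple:
    """Project the last reported position forward along the current track."""
    dist_nm = groundspeed_kts * (latency_s / 3600.0)            # distance covered during the delay
    dlat = dist_nm * math.cos(math.radians(track_deg)) / 60.0   # 1 degree of latitude is ~60 NM
    dlon = (dist_nm * math.sin(math.radians(track_deg))
            / (60.0 * math.cos(math.radians(lat_deg))))         # longitude degrees shrink with latitude
    return lat_deg + dlat, lon_deg + dlon

# Example: 120 kt groundspeed, tracking due east, 2-second effective link delay
print(predict_position(35.0, -106.0, 120.0, 90.0, 2.0))
```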
Because a GCS is the means of controlling an aircraft, a number of philosophies exist as to what type of crew should man it. The USAF takes the position that a military-trained pilot is required. The FAA, Navy, and Marines simply require a basic Private Pilot certificate. The US Army, however, trains what it simply calls “operators,” who are not actually pilots (McCarley & Wickens, 2005). The success of military UAS suggests that neither approach is inherently incorrect. A significant finding from analyzing UAS mishaps is that, across services, there are comparable rates of mishaps resulting from judgment and decision-making errors, as well as those involving crew resource management errors (Tvaryanas, Thompson, & Constable, 2005). This evidence points to the aircraft and GCS system design as the contributing source of error, and it also implies that the need for extensive prerequisite training is misguided. The USAF utilizes many manual control processes and believes that using pilots of manned aircraft to control UAS allows the transfer of skills and knowledge to this enterprise (Cantwell, 2009). But since the design of the system the pilots are placed in is significantly lacking in overall human factors terms, even the thorough training they received as pilots has been insufficient to overcome those shortfalls. Even more complicated is the question of crew makeup. Military UAS typically utilize two crewmembers, separating flight control from payload control tasks. While research has shown poor performance on current systems when one operator must control all the functions, other research suggests improvements to the GCS can mitigate performance loss under increased workload (McCarley & Wickens, 2004). A further complicating aspect of UAS operation compared to manned cockpits is that the crew is not likely to remain with the aircraft for the duration of a mission. Some UAS are capable of 40+ hour flights, and it is common for control of an aircraft to pass from crew to crew, or even from one GCS to another. While this migration of control is beneficial for mitigating operator fatigue, other aspects are potentially detrimental. What is not clear is the implication for display design when operators must interact with a system that is already in execution (Tvaryanas, 2006). There is potential for degraded situational and system awareness, with resultant impacts on performance and decision making.
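Because control migrates between crews and control stations during long missions, a structured handover brief is one plausible way to preserve awareness across the transition. The Python sketch below is a hypothetical data structure for such a brief; the field names and example values are invented for illustration and do not describe any real GCS interface.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class HandoverRecord:
    aircraft_id: str
    endurance_remaining_hr: float         # fuel/endurance left at transfer
    active_mode: str                      # current autopilot or automation mode
    current_tasking: str                  # what the aircraft is doing right now
    open_discrepancies: List[str] = field(default_factory=list)  # known system issues
    acknowledged: bool = False            # set when the relieving crew has reviewed it

    def accept(self) -> None:
        """Relieving crew explicitly acknowledges the briefed aircraft state."""
        self.acknowledged = True
        print(f"{self.aircraft_id}: control accepted in mode {self.active_mode}, "
              f"{len(self.open_discrepancies)} open discrepancy(ies) briefed")

record = HandoverRecord("UAS-12", 18.5, "ALT HOLD / ORBIT",
                        "overwatch of assigned area", ["degraded IR focus"])
record.accept()
```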
Many negative human factors issues manifest as problems with information presentation and processing by the pilot. Divided attention becomes an issue when too much information is presented in a single channel (Wickens & Dixon, 2002); that is, the human can only differentiate and process a finite number of cues with one method of perception. There is a significant risk of error due to cognitive saturation and tunneling (Cummings & Mitchell, 2007; Dixon, Wickens, & Chang, 2003). In the GCS, these cues are typically exclusively visual. Because so much information is presented in a single channel, the operator is easily overwhelmed. Wickens and Dixon (2002) demonstrated the benefits of multi-channel attention modes that allow humans to perceive and assimilate disparate information at once; in this way, workload can be maintained or increased with less risk of error. Further, the limited cues present in current GCS have negative impacts on manual control of the aircraft (Elliott & Stewart, 2006). The lack of cues leaves the operator with inaccurate or incomplete perceptions of aircraft states and behavior, and the consequence is poor decision making and increased errors.
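In the spirit of the multi-channel findings above, the following hypothetical Python sketch routes alerts to visual or auditory presentation depending on priority and current visual clutter. The priority names and the clutter threshold are assumptions made for illustration, not a documented design rule.

```python
def route_alert(priority: str, visual_load: int) -> str:
    """Choose a presentation channel given alert priority and current visual clutter."""
    if priority == "critical":
        return "audio + visual"   # redundant coding for the most urgent alerts
    if visual_load > 5:           # assumed clutter threshold on the visual channel
        return "audio"            # offload routine alerts to the auditory channel
    return "visual"

alerts = [
    ("LINK DEGRADED", "critical", 3),
    ("FUEL IMBALANCE", "caution", 7),
    ("WAYPOINT CAPTURED", "advisory", 2),
]
for name, priority, load in alerts:
    print(f"{name} -> {route_alert(priority, load)}")
```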
Automation, however, has been demonstrated to improve performance in objective measurements of UAS operation where it can limit the amount of information the pilot must process (Wickens & Dixon, 2002). Automation is not necessarily the complete self-control of a system by computers; it exists along a scale with many levels (Wickens et al., 1998; Elliott & Stewart, 2006; Drury & Scott, 2008). Although automation solves some issues of excessive workload and control limitations, new problems must be addressed. Complete automation relegates the operator to the task of passive monitoring, which humans are not particularly adept at accomplishing (Tvaryanas, 2006). As an automated system makes decisions and performs actions, there is a risk the operator is not aware of these changes. As the operator falls out of the loop, there is potential for mode confusion, where the operator does not understand the state of the system or why that state exists (Wickens & Hollands, 2000). Once mode confusion takes place, the risk of error and degraded performance increases significantly. Additionally, the operator can come to distrust automated systems; the effect may be increased workload and decreased performance when the operator does not believe the automation will operate correctly. Alternatively, an overly trusting operator becomes complacent and suffers from automation bias, blindly trusting the system even when the consequences are negative (Cummings & Mitchell, 2007). Although automation design issues are known human factors problems, their interaction with other unique aspects of GCS design is a compounding problem and thus unique to UAS.
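One commonly discussed mitigation for mode confusion is to make the automation level explicit and announce every transition. The Python sketch below illustrates that idea with an invented four-level scale and announcement text; it is not drawn from any specific system or published taxonomy.

```python
from enum import Enum

class AutomationLevel(Enum):
    MANUAL = 1        # operator flies directly
    ASSISTED = 2      # automation suggests, operator executes
    SUPERVISED = 3    # automation executes, operator approves or overrides
    AUTONOMOUS = 4    # automation executes, operator monitors

class FlightAutomation:
    def __init__(self, level: AutomationLevel) -> None:
        self.level = level

    def change_level(self, new_level: AutomationLevel, reason: str) -> None:
        """Announce every mode transition so the operator stays in the loop."""
        print(f"MODE CHANGE: {self.level.name} -> {new_level.name} ({reason})")
        self.level = new_level

automation = FlightAutomation(AutomationLevel.SUPERVISED)
automation.change_level(AutomationLevel.AUTONOMOUS, "lost-link profile engaged")
```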
Alternative Actions
Simply copying and continuing with current design requirements is possible and will yield a usable product, as it has for many years, but safety and efficiency will be limited, perhaps excessively so. The long-term success of UAS is not probable under this paradigm. Human factors deficiencies often result in mishaps eventually, and the potentially exponential growth of UAS into the civilian and private sector makes the risk untenable. Trial and error is likewise an insufficient design methodology for a technological sector; the growth of capabilities and emerging tasks for UAS will outpace the ability of designers to iterate improvements based on experienced deficiencies.
Recommendation
Further research needs to be conducted to define what is truly necessary for GCS systems to operate effectively. It will be a summation of current human factors research combined with basic cockpit design precepts, but it must include the unique environment and cognitive challenges of the teleoperation of aircraft. A particular difficulty in defining appropriate standards is the numerous variations of unmanned aircraft and their task requirements, as well as their automation capabilities. Drury and Scott (2008) compiled a framework of the types of awareness required for UAS operation. Notably, they include the vehicle itself as one of the system components that needs information. The four categories of awareness they identify are Human-UAV, Human-Human, UAV-Human, and UAV-UAV (Drury & Scott, 2008). Importantly, the Human-Human category includes those individuals indirectly influencing the air vehicle, described as mission stakeholders. This concept brings completeness to the design of the GCS as a whole system and includes all the supporting elements of HFACS analysis. This framework should be the starting point for more detailed analysis of GCS systems and a baseline for determining operator use cases. Task-based analysis will then be able to apply the more abstract human factors principles to determine effective design strategies. An operational assessment of the MQ-9 aircraft and control station rated the overall system satisfactory in terms of system usability and design, but a number of areas were rated unsatisfactory in broader terms of mission and task completion (AFOTEC, 2013). This further supports the conclusion that design standards in their current form are insufficient to address the complicated use and unique problems facing UAS control. UAS are the culmination of decades of innovation in technology. Remote operation of a flying vehicle is an incredibly complex task, one that requires more detailed study and the development of UAS-specific standards.
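To show how the Drury and Scott (2008) categories could seed such a task analysis, the Python sketch below captures the four awareness relationships as a simple checklist an analyst could iterate over. The example information needs listed under each category are illustrative guesses, not the authors' own lists.

```python
AWARENESS_FRAMEWORK = {
    "Human-UAV":   ["vehicle health", "current automation mode", "fuel and endurance"],
    "Human-Human": ["mission stakeholders' intent", "other crews operating nearby"],
    "UAV-Human":   ["commands received from the operator", "constraints the operator has set"],
    "UAV-UAV":     ["positions of cooperating vehicles", "deconfliction agreements"],
}

# Walk the framework as a starting checklist for a GCS task analysis
for category, information_needs in AWARENESS_FRAMEWORK.items():
    print(f"{category}:")
    for need in information_needs:
        print(f"  - {need}")
```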
References
Air Force Operational Test and Evaluation Center (AFOTEC). (2013). MQ-9 Reaper Increment 1 (Block 5 Aircraft/Block 30 Ground Control Station) Operational Assessment (OA)-3 Report. Kirtland Air Force Base, New Mexico: Author.
ANSI/HFES-100. (2007). Human Factors Engineering of Computer Workstations. Santa Monica, CA: Human Factors and Ergonomics Society.
Calhoun, G., Draper, M., Abernathy, M., Delgado, F., & Patzek, M. (2005). Synthetic Vision System for Improving Unmanned Aerial Vehicle Operator Situation Awareness. Proceedings of SPIE vol. 5802, 219-230. doi:10.1117/12.603421
Cantwell, H. (2009). Operators of Air Force Unmanned Aircraft Systems. Air & Space Power Journal, 23(2), 67-77.
Cummings, M., and Mitchell, P. (2007). Operator Scheduling Strategies in Supervisory Control of Multiple UAVs. MIT Humans and Automation Lab. Retrieved from: http://web.mit.edu/aeroastro/labs/halab/papers/CummingsMitchell_AST06.pdf
Department of Defense. (2012). Mil-Std 1472G: Department of Defense Design Criteria Standard, Human Engineering. Redstone Arsenal, AL: Author.
Department of Defense. (2001). Mil-Std-1787C: Department of Defense Interface Standard, Aircraft Display Symbology. Wright-Patterson AFB, Ohio: Author.
Dixon, S., Wickens, C., & Chang, D. (2003) Comparing Quantitative Model Predictions to Experimental Data in Multiple-UAV Flight Control. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 47(1), 104-108.
Drury, J., & Scott, S. (2008). Awareness in Unmanned Aerial Vehicle Operations. The International C2 Journal, 2(1), 1-10. Retrieved from: http://www.cs.uml.edu/~holly/91.550/papers/Drury_Scott_UAV-Awareness.pdf
Elliott, L., & Stewart, B. (2006). Automation and Autonomy in Unmanned Aircraft Systems. In N. Cooke, H. Pringle, & H. Pedersen (Eds.), Human Factors of Remotely Operated Vehicles, Volume 7. New York: JAI Press.
Federal Aviation Administration (FAA). (2015). Federal Aviation Regulation Part 61. Retrieved from: http://www.ecfr.gov/cgi-bin/text-idx?SID=9fbfcef04c89d713fb11ea739ad865db&mc=true&node=pt14.2.61&rgn=div5
Jones, E., Harron, G., Hoffa, B., Lyall, B., & Wilson, J. (2012). Research Project: Human factors Guidelines for Unmanned Aircraft System (UAS) Ground Control Station (GCS) Design, Year 1 Report. NASA Ames Research Center. Retrieved from: http://www.researchintegrations.com/publications/Jones_etal_2012_Human-Factors-Guidelines-for-UAS-GCS-Design_Year-1.pdf
McCarley, J., & Wickens, C. (2004). Human Factors Concerns in UAV Flight. University of Illinois at Urbana-Champaign Institute of Aviation, Aviation Human Factors Division. Retrieved from: https://www.researchgate.net/profile/Christopher_Wickens/publication/241595724_HUMAN_FACTORS_CONCERNS_IN_UAV_FLIGHT/links/00b7d53b850921e188000000.pdf
McCarley, J., and Wickens, C. (2005). Human Factors Implications of UAVs in the National Airspace. University of Illinois at Urbana-Champaign Institute of Aviation, Aviation Human Factors Division. Retrieved from: http://www.tc.faa.gov/logistics/Grants/pdf/2004/04-G-032.pdf
Tvaryanas, A. (2006). Human Factors Considerations in Migration of Unmanned Aircraft Systems (UAS) Operator Control. USAF Performance Enhancement Research Division. Retrieved from: http://www.wpafb.af.mil/shared/media/document/AFD-090121-046.pdf
Tvaryanas, A., Thompson, W. & Constable, S. (2005). The U.S. Military Unmanned Aerial Vehicle (UAV) Experience: Evidence-Based Human Systems Integration Lessons Learned. Retrieved from http://www.rto.nato.int/abstracts.asp
Tvaryanas, A., Thompson, W. & Constable, S. (2006). Human Factors in Remotely Piloted Aircraft Operations: HFACS Analysis of 221 Mishaps Over 10 Years. Aviation, Space, and Environmental Medicine, 77(7), 724-732.
Waraich, Q. (2013). Heterogeneous Design Approach for Ground Control Stations to Marginalize Human Factors Mishaps in Unmanned Aircraft Systems (Doctoral Dissertation). Retrieved from: https://www.sintef.no/globalassets/project/hfc/documents/waraich-qaisar-ehf-in-uas.pdf
Wickens, C. & Dixon, S. (2002). Workload Demands of Remotely Piloted Vehicle Supervision and Control. Savoy, IL: University of Illinois, Aviation Research Lab.
Wickens, C., Gordon, S., & Liu, Y. (1998). An Introduction to Human Factors Engineering. New York: Addison Wesley Longman, Inc.
Wickens, C., & Hollands, J. (2000). Engineering Psychology and Human Performance (3rd ed.). New Jersey: Prentice Hall.
 
 
 
 

Sunday, December 6, 2015

Automatic Take-off and Landing


Automatic Takeoff and Landing
Shawn M. Wyne
ASCI 638 – Human Factors in Unmanned Systems
Embry-Riddle Aeronautical University
Abstract
        Airplanes require a host of skills for pilots to operate them safely. When airborne in the cruise phase, there is a larger margin of error than in flight near terrain. By obvious necessity, aircraft must come near the ground at a minimum for takeoff and landing. These regimes are the most dangerous, and many mishaps that could otherwise be recovered from instead end in an accident. As with all aspects of flight, anything that has the potential to cause accidents has been addressed through the use of technology. Manned aircraft and unmanned aerial systems (UAS) both benefit from the use of automatic takeoff and landing systems.
        Keywords: UAS, autoland, automatic takeoff and landing

Auto Takeoff and Landing
        Automatic takeoff and landing systems are employed on a number of aircraft, for a very simple reason: aircraft have many accidents during the takeoff and landing phases of flight. Often these accidents are the result of poor flying conditions. Low visibility is a significantly difficult situation to land in, as there is minimal visual reference with which to align the aircraft to the runway. A number of instruments have been developed to provide a pilot with this assistance; the localizer and the instrument landing system have been around for decades. But even their accuracy is limited, and bad weather may preclude their use. The FAA defines the weather minimums under which particular systems are allowed to land. The worst allowable landing weather is called Category IIIB, which means a decision height of less than 50 feet and runway visibility between 150 and 700 feet (FAA, 1999). These conditions are such that an airliner may not actually see the runway until as few as five seconds before touchdown. Category III weather minima essentially dictate that an automatic landing system be used; manual landing is only approved for Category II or better weather. Another reason for using automatic takeoff or landing procedures is for UAS. Datalink latency may make manual control of the aircraft dangerous, or the control station may not even have the mechanical controls to allow manual flight at all. All of these scenarios create situations where an autopilot may be preferred, or required.
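A minimal Python sketch of the decision logic summarized above follows: given a decision height and runway visual range, classify the approach category and decide whether autoland is required. The thresholds mirror the figures quoted in the text and are a rough simplification of the actual FAA criteria.

```python
def landing_category(decision_height_ft: float, rvr_ft: float) -> str:
    """Rough ILS category classification using the figures cited in the text."""
    if decision_height_ft < 50 and 150 <= rvr_ft < 700:
        return "CAT IIIB"
    if decision_height_ft < 100:
        return "CAT IIIA"
    if decision_height_ft < 200:
        return "CAT II"
    return "CAT I or better"

def autoland_required(category: str) -> bool:
    # Per the discussion above, manual landing is approved only for CAT II or better
    return category.startswith("CAT III")

category = landing_category(decision_height_ft=0.0, rvr_ft=300.0)
print(category, "- autoland required:", autoland_required(category))
```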
        One aircraft that uses a variety of automatic systems is the Boeing 737-600 series and higher. The 737 utilizes a system capable of three different automatic approach and landing levels. A Boeing publication by Craig, Houck, and Shomber (2003) describes the different capabilities of the aircraft. The true autoland is the only one certified for Category IIIB operations. It works by utilizing the existing instrumentation, plus details from the aircraft Flight Management System (FMS) such as runway width and length, to manage aircraft parameters like pitch, engines, and brakes as appropriate. Once set up by the pilot, it is a hands-off system as long as it is working: the aircraft maintains course and descent path all the way through the flare and touchdown, applies brakes and spoilers, and brings the aircraft to a complete stop. Use of reverse thrust remains a pilot-controlled item. Since it is a fail-safe design (as required by regulation), a failed system automatically reverts to manual flight, and the pilot can also revert to manual flight by choice. A second method of landing is with a GPS-based landing system, which relies on both GPS signals and a special ground-based system to provide positional information to the aircraft. This also can be reverted to manual flight, and as of the publication date it was only approved for Category I approaches with a 200-foot decision height, though automatic landing is still available. There is also a third automatic approach system, but it is only for non-precision approaches and requires pilot control of altitude and a manual descent and landing.
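The fail-safe behavior described above can be illustrated with a short Python sketch: an autoland sequence that steps through its phases and reverts to manual flight the moment a required system reports a fault. The phase names and the health check are simplifications invented for illustration, not Boeing's actual logic.

```python
from typing import Callable

PHASES = ["approach", "flare", "touchdown", "rollout", "stopped"]

def run_autoland(system_healthy: Callable[[str], bool]) -> str:
    """Advance through autoland phases, reverting to manual flight on any fault."""
    for phase in PHASES:
        if not system_healthy(phase):
            print(f"Fault detected during {phase}: reverting to manual flight")
            return "manual reversion"
        print(f"Autoland phase complete: {phase}")
    return "autoland complete"

# Example run with a healthy system (always reports True)
print(run_autoland(lambda phase: True))
```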
        Another aircraft that utilizes automatic takeoff and landing capability is the General Atomics MQ-1C Gray Eagle. This aircraft utilizes automatic systems primarily because the Army does not use pilots to control it. The operators are primarily sensor managers and control aircraft position by manipulating the autopilots. The takeoff and landing capability was originally enabled by the Tactical Automatic Landing System (TALS), a system of ground-based radar and aircraft equipment that orients the aircraft's location in relation to the runway (“TALS,” n.d.). However, because that system relies on a ground element that must be precisely installed and aligned, it proved impractical for military use overall. The system was converted to an Automatic Takeoff and Landing System (ATLS) that is wholly onboard the aircraft. This system is strictly GPS-based and is therefore more flexible. Although the system would not be approved for Category IIIB type approaches, the military considers that irrelevant, as there are no personnel at risk in a low-visibility landing, only equipment. As of 2013, the new system had surpassed 20,000 safe landings (Pocock, 2013).
        The success of autoland systems shows their viability in both manned and unmanned systems. However, complete reliance on such technology is not appropriate. A number of conditions or situations, both internal and external, can make autoland features unusable, so there should still be a backup method for manual landing. This creates a problem for training and maintaining pilot proficiency. Complacency is always an issue with automated systems, and they must be designed with human interaction in mind.
References
        Craig, R., Houck, D., and Shomber, R. (2003). 737 Approach Navigation Options. Boeing Aero Magazine. Retrieved from: http://www.boeing.com/commercial/aeromagazine/aero_22/737approach.pdf.
        FAA (1999, July 13). Advisory Circular 120-28D Criteria for Approval of Category III Weather Minima For Takeoff, Landing, and Rollout. US Department of Transportation Federal Aviation Administration.
        Pocock, C. (2013, October 25). Improved Gray Eagle UAS Flies for 45 Hours. Retrieved from: http://www.suasnews.com/2013/10/25700/gray-eagle-completes-20000-automated-takeoffs-landings/
         TALS Tactical Automatic Landing System. (n.d.). Sierra Nevada Corporation. Retrieved from http://sncorp.com/Pdfs/BusinessAreas/TALS%20Product%20Sheet.pdf