SM Agent Technology For Human Operator Modelling

Mario Selvestrel (1); Evan Harris (1); Gokhan Ibal (2)
(1) KESEM International, Mario.Selvestrel@kesem.com.au, Evan.Harris@kesem.com.au
(2) Air Operations Division, DSTO, Gokhan.Ibal@defence.gov.au

Abstract. When specifying and creating a model of a human operator, it is a desirable objective that the model should reason in a way that is accepted as intuitive by analysts, domain experts, operators and lay people. The OODA loop is a widely known and accepted model within the military domain that characterises military decision making as a four-part looping process. The four parts of the OODA loop are: Observe, Orient, Decide and Act. The Air Operations Division (AOD) of DSTO has mapped the OODA loop onto a four-box model as: Situation Awareness (Observe), Situation Assessment (Orient), Tactic Selection (Decide) and Tactic Execution (Act). In this paper we present SM Agent Technology, a software architecture for human operator models that implements this four-box model as a collection of Finite State Machines. In this architecture, each stage of the four-box model is represented by one or more Finite State Machines. Furthermore, the human operator's static knowledge, which does not vary in time, and dynamic knowledge, which does vary in time, are explicitly represented using Belief-Desire-Intention (BDI) concepts. By constructing the model along these lines using the SM Agent architecture, the complexity of the overall model is reduced, facilitating knowledge transfer, understanding and discussion between analysts, domain experts, operators and software engineers. A model of an Armed Reconnaissance Helicopter (ARH) Crew has been developed using SM Agents and integrated into the BattleModel simulation environment, where it has been successfully used in conducting studies.

1. INTRODUCTION

When defining an operator model it is desirable that the model should make decisions in a way that is accepted as intuitive by the operators being modelled and by other lay people who may use the model. In this paper we present the SM Agent architecture, a software architecture designed to model the processes that operators go through when making decisions. The decision-making process is modelled as an implementation of the OODA loop, which characterises military decision making as a four-part, continually looping process: Observe, Orient, Decide and Act [4]. Furthermore, the human operator's static knowledge, which does not vary in time, and dynamic knowledge, which does vary in time, are explicitly represented using Belief-Desire-Intention (BDI) concepts [2].

The OODA framework was originally described by John Boyd and has been both well received and widely adopted within military circles. More recently it has emerged in business, particularly in competitive and hostile environments, where it is used to characterise and explain the operation not just of decision makers but of the enterprise as a whole.

As a concrete example of using the SM Agent architecture, we present an overview of the model of an Armed Reconnaissance Helicopter (ARH) Crew developed using SM Agents, integrated into the BattleModel simulation environment at the Air Operations Division (AOD) of DSTO, and successfully used in Operations Analysis studies for the evaluation of helicopter evasion tactics [3].

Section 2 presents an overview of the OODA Framework. Section 3 presents an overview of the SM Agent architecture that implements the OODA Framework to model human operators. The final section presents our conclusions.

2. THE OODA FRAMEWORK

In the military domain the OODA loop maps [4] onto the concepts of Situation Awareness, Situation Assessment, Tactic Selection and Tactic Execution. Within DSTO AOD this process is also referred to as the four-box model. We use this terminology in this paper, as it is the most familiar to the operators who provide input to the Operations Analysis studies for which the operator models are used within DSTO AOD.

The four components of the OODA loop are sufficiently generic to span the breadth of the decision-making currently required for ARH simulations, while being clear enough that operators are comfortable with the architecture and modelling processes. The SM Agent implementation of the OODA architecture strikes a balance between the flexibility analysts need to capture the depth and complexity of operator decision-making and the simplicity and modularity required to keep the SM Agent models understandable by operators and easy to create and maintain by software engineers.
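To make this mapping concrete, the following Python sketch (not part of the paper's implementation; all class and method names are illustrative assumptions) shows the four-box model as a loop executed once per simulation time step, with each box a placeholder for the Finite State Machines described in Section 3.

# Hypothetical sketch of the four-box model as a per-time-step loop.
class SMAgentSketch:
    """One pass through Observe / Orient / Decide / Act per simulation time step."""

    def step(self, sensor_data):
        beliefs = self.observe(sensor_data)      # Observe  -> Situation Awareness
        assessment = self.assess(beliefs)        # Orient   -> Situation Assessment
        tactic = self.select_tactic(assessment)  # Decide   -> Tactic Selection
        return self.execute(tactic)              # Act      -> Tactic Execution

    # Each method below would be realised by one or more Finite State Machines
    # in the SM Agent architecture; these bodies are placeholders only.
    def observe(self, sensor_data):
        return {"contacts": sensor_data}

    def assess(self, beliefs):
        return "neutral" if not beliefs["contacts"] else "defensive"

    def select_tactic(self, assessment):
        return "fly_waypoints" if assessment == "neutral" else "evade"

    def execute(self, tactic):
        return ["command:" + tactic]

print(SMAgentSketch().step([]))   # -> ['command:fly_waypoints']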

The remainder of this section provides an overview of each of the four components of the OODA loop and its mapping onto the four-box model, using the ARH Crew behaviour as a motivating example.

Figure 1: The OODA Process (Observe, Orient, Decide and Act mapped onto Situation Awareness, Situation Assessment, Tactic Selection and Tactic Execution)

2.1 Observe (Situation Awareness)

Situation Awareness is the process of observing the environment and of developing and maintaining a consistent set of beliefs about it by the operator. The ARH Crew observe their environment through sensors and communication systems. Sensors include electro-optical systems and Missile Warning Receivers, and unaided means such as eyesight. Communication systems include radio and data link.

Some observations trigger sub-cognitive functions or automatic reactions, such as obstacle avoidance in Nap Of the Earth (NOE) flying. These functions require little conscious decision-making on the part of the operator and are controlled by the reflex or hand-eye coordination skill of the operator. Some observations trigger situation awareness functions, such as determining whether a contact is a potential threat. These functions require knowledge and reasoning about the environment in which the operator is immersed. For example, the ARH crew has knowledge about the potential range at which a missile can be fired at the ARH, allowing the crew to reason about the threat projected by the missile.

At the conclusion of Situation Awareness the situation is known and described, but is independent of any interpretation. In BDI terms the operator model has generated Beliefs about the environment in which it is immersed. For example, the sensor inputs enable the crew to develop a mental picture of where the known contacts are geometrically, but no meaning is attached to what the positions of the contacts mean for the crew.

2.2 Orient (Situation Assessment)

Situation Assessment is the process of determining what the current beliefs about the environment mean to the observer (operator). A change in Situation Assessment only occurs when the observed environment changes. For example, do the latest changes to the current perceived world indicate that the observer is under threat? Situation Assessment is based on: dynamic knowledge, the beliefs about the current situation; static knowledge, beliefs that do not change over time; and the operator's desires or goals.

For example, an assessment might be that a contact has radar lock on me and is about to fire a surface-to-air missile (SAM) at me. That a contact has a radar lock on me is situation awareness derived from observed sensor data. That it is about to fire a SAM at me is an internal, static belief that a missile firing typically follows a radar lock. The output of Situation Assessment is a set of BDI Beliefs that assess the situation and help to categorise the types of tactics considered, for example whether offensive or defensive tactics will be considered.

2.3 Decide (Tactic Selection)

Tactic Selection is the process of choosing the most appropriate tactic, or Standard Operating Procedure, for the situation as currently assessed. A change in Tactic or Standard Operating Procedure selection only occurs if an assessment of the situation changes. For example, has the observer's posture changed from offensive to defensive? A tactic is selected based on the situation and is conditioned by the assessment. For example, the contact has radar lock on me (Situation Awareness) and I need to be defensive because the threat is about to fire a SAM at me (Situation Assessment). Based on these beliefs, I must evade by flying to the nearest masking point that hides me from the SAM launcher (Tactic Selection). The tactic selected is based on a series of plans. Only the Situation Awareness and Situation Assessment are required to choose a particular tactic to be executed.

2.4 Act (Tactic Execution)

Tactic Execution is the process of performing the actions associated with a plan over time. Tactics and Standard Operating Procedures are based on plans and parameters that determine how the plans are implemented. For example, sensor ranges and offensive weapon ranges determine suitable locations from which to launch an attack. Tactic Execution results in actions controlling elements of the physical environment: for example, the pilot controlling the helicopter speed and altitude; the crew controlling their individual sensors; the weapons officer selecting, firing and guiding the weapons; and the crew communicating with other operators through radio calls or data link transmissions.
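As an illustration of how the running example of this section might be expressed with BDI-style knowledge, the following hypothetical sketch separates a static belief (that a radar lock typically precedes a missile firing) from dynamic beliefs derived from sensing, and uses simple rules for the Orient and Decide steps. The belief keys, function names and values are assumptions made for illustration, not the SM Agent implementation.

# Illustrative static knowledge: beliefs that do not change over time.
STATIC_KNOWLEDGE = {
    "radar_lock_precedes_sam_launch": True,
}

def assess(dynamic_beliefs):
    """Orient / Situation Assessment: interpret current beliefs as a posture."""
    if (dynamic_beliefs.get("radar_lock_on_me")
            and STATIC_KNOWLEDGE["radar_lock_precedes_sam_launch"]):
        # "a contact has radar lock on me and is about to fire a SAM at me"
        return "defensive"
    if dynamic_beliefs.get("primary_target_detected"):
        return "offensive"
    return "neutral"

def select_tactic(posture, dynamic_beliefs):
    """Decide / Tactic Selection, conditioned on the assessment."""
    if posture == "defensive":
        # evade to the nearest masking point that hides me from the launcher
        return ("evade_to_masking_point", dynamic_beliefs.get("nearest_masking_point"))
    if posture == "offensive":
        return ("attack", dynamic_beliefs.get("primary_target"))
    return ("fly_briefed_waypoints", None)

beliefs = {"radar_lock_on_me": True, "nearest_masking_point": "ridge_west"}
print(select_tactic(assess(beliefs), beliefs))   # ('evade_to_masking_point', 'ridge_west')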

3. THE SM AGENT OPERATOR MODEL

The primary components of the SM Agent Human Operator Model are an implementation of each of the four components of the OODA Framework discussed in Section 2, and a model of the knowledge of the operator. These components are illustrated in Figure 2.

Figure 2: Human Operator Model (Operator Knowledge, consisting of Static Knowledge such as pre-briefed mission data (waypoints, primary target), weapon performance and threat performance, and Dynamic Knowledge such as own state and the perceived world; Observations of own state and sensor data; and Actions commanding the platform, sensors and weapons)

3.1 Operator Knowledge or Beliefs

Decisions made in the OODA Framework are governed by the operator's knowledge. The operator knowledge consists of Static Operator Knowledge, which does not vary with time, and Dynamic Operator Knowledge, which varies with time.

3.1.1 Static Operator Knowledge

Static Operator Knowledge consists of information available to the operator that does not vary with time. Examples of static knowledge include: mission-specific information, such as the primary objective of the mission; knowledge about the performance of the operator's own equipment, such as the platform, sensors and weapons; and knowledge about the threats that can be projected by the enemy.

3.1.2 Dynamic Operator Knowledge

Dynamic Operator Knowledge consists of information that is continuously updated as the operator gains data from and about the environment. Dynamic operator knowledge consists of physical information available to the operator, including the state of the operator's own equipment, such as the platform's position and orientation, and data provided to the operator through the sensors, such as the location and type of contacts. Dynamic operator knowledge also covers all aspects of the OODA loop that can change over time, including: Situation Awareness, such as where the operators believe contacts are located; Situation Assessment, such as the operator's posture of offensive or defensive; Tactic Selection, such as which tactic is currently being executed; and Tactic Execution, such as the current phase of the tactic being executed.
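The following sketch suggests one way the static and dynamic knowledge of Sections 3.1.1 and 3.1.2 could be organised as two simple containers. Field names are hypothetical and merely mirror the examples given in the text, not the actual SM Agent data structures.

from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class StaticOperatorKnowledge:
    """Beliefs that do not vary with time: pre-briefed mission and performance data."""
    waypoints: List[str] = field(default_factory=list)
    primary_target: Optional[str] = None
    weapon_performance: Dict[str, float] = field(default_factory=dict)   # e.g. maximum weapon ranges
    threat_performance: Dict[str, float] = field(default_factory=dict)   # e.g. SAM engagement ranges

@dataclass
class DynamicOperatorKnowledge:
    """Beliefs continuously updated as the operator observes the environment."""
    own_state: Dict[str, float] = field(default_factory=dict)       # position, orientation, ...
    perceived_world: Dict[str, dict] = field(default_factory=dict)  # contacts fused from sensors
    posture: str = "neutral"                 # Situation Assessment output
    current_tactic: Optional[str] = None     # Tactic Selection output
    tactic_phase: Optional[str] = None       # Tactic Execution state

static = StaticOperatorKnowledge(waypoints=["WP1", "WP2"], primary_target="bunker")
dynamic = DynamicOperatorKnowledge()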

3.2 OODA Loop

In the SM Agent implementation of the OODA Framework, each component is implemented by one or more Finite State Machines (FSMs). Figure 3 illustrates the relationship between the OODA loop and SM Agent functionality: components of the ARH crew operator model are horizontally aligned with the aspect of the OODA loop they correspond to.

Figure 3: Decomposition of the four-box model (Situation Awareness, Situation Assessment, Tactic Selection and Tactic Execution, with ARH crew components such as the Offensive, Defensive and Neutral postures and the PGM Attack, Rocket Attack, IR SAM Evade, Evade and Fly Waypoint tactics aligned to the corresponding phase)

The remainder of this section discusses each phase of the OODA loop, mapped into the four-box model, in the context of the ARH crew operator model.

3.2.1 Situation Awareness

The primary role of Situation Awareness is to build a uniform perception of the world based on the operator's observations of the environment. Situation Awareness updates the operator's dynamic operator knowledge. For example, if information is received from more than one sensor, Situation Awareness combines it to create one uniform perceived world. If the sensors are off, the perceived world can be significantly different. Figure 4 contains an example of a corresponding situation awareness state diagram, simplified for clarity, where different states for Situation Awareness are considered depending on whether the sensors are on or off. The entry action for each state results in the creation of a corresponding Situation Awareness (SA) FSM and the termination of the previously active Situation Awareness FSM, together with the Tactic Selection and Tactic Execution FSMs that it created.

Figure 4: Situation Awareness State Diagram (Sensors On and Sensors Off states with take off and land transitions; the entry action of each state creates the corresponding Situation Awareness FSM)

Situation Awareness occurs at every simulation time step. It is complete when all available data from the environment has been processed and the dynamic operator knowledge has been updated. Situation Assessment for the operator commences once Situation Awareness is complete, using the currently active Situation Assessment FSM.

3.2.2 Situation Assessment

The primary role of Situation Assessment is to analyse the perceived world generated by Situation Awareness and determine its implications for the operator. At a high level, Situation Assessment for the ARH crew operator model is responsible for determining whether the operator's posture is Offensive, Defensive or Neutral. The SM Agent model uses a series of plans to determine the appropriate posture for the given situation. For example, if a new contact is the primary target according to the pre-mission briefing, the operator is to become offensive against it. If a contact projects a threat against the operator, the operator is to become defensive. If there are no contacts, the operator is neutral and follows the pre-mission briefed waypoints.

On completion of Situation Assessment, if the posture (state) of the operator does not change, the existing Tactic Selection and Tactic Execution processes continue using the currently active Tactic Selection and Tactic Execution FSMs. If the posture of the operator changes, the current Tactic Selection and Tactic Execution FSMs are terminated and a new Tactic Selection FSM based on the new posture is created.

Another role of Situation Assessment is to perform a Replan on the completion of Tactic Execution. Replanning may also be performed on a regular basis to determine whether the current tactic being executed is still suitable; for example, if the current tactic has been active for over 30 seconds, perform a replan.

This approach to modelling Situation Assessment maps well onto a Finite State Machine implementation, where the primary states are Neutral, Defensive and Offensive, with state changes being governed by the operator's static and dynamic knowledge. Figure 5 contains a simplified example of the situation assessment state diagram used by the ARH crew operator model.

Figure 5: Situation Assessment State Diagram (Neutral, Offensive and Defensive Posture states, with transitions on primary target detected, primary target destroyed, threat projected and threat removed; the entry action of each state creates the corresponding FSM)
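A minimal sketch of the posture machine of Figure 5 is given below, with a plain Python class standing in for the SM Agent FSM machinery. The entry action of each posture state creates a subordinate Tactic Selection FSM and terminates the one it replaces, as described above; class names, method names and transition conditions are illustrative assumptions, not the paper's implementation.

class TacticSelectionFSM:
    """Stand-in for the subordinate FSM created by a posture state's entry action."""
    def __init__(self, posture):
        self.posture = posture
    def terminate(self):
        pass   # would also terminate any Tactic Execution FSM it created

class SituationAssessmentFSM:
    def __init__(self):
        self.state = "Neutral"
        self.tactic_selection = TacticSelectionFSM("Neutral")   # entry action of initial state

    def _enter(self, new_state):
        # entry action: terminate the old subordinate FSM and create a new one
        self.tactic_selection.terminate()
        self.state = new_state
        self.tactic_selection = TacticSelectionFSM(new_state)

    def update(self, dynamic):
        """Transitions governed by the operator's dynamic knowledge (simplified)."""
        if self.state == "Neutral" and dynamic.get("primary_target_detected"):
            self._enter("Offensive")
        elif self.state in ("Neutral", "Offensive") and dynamic.get("threat_projected"):
            self._enter("Defensive")
        elif self.state == "Defensive" and not dynamic.get("threat_projected"):
            self._enter("Neutral")
        elif self.state == "Offensive" and dynamic.get("primary_target_destroyed"):
            self._enter("Neutral")
        return self.state

fsm = SituationAssessmentFSM()
print(fsm.update({"primary_target_detected": True}))   # Offensive
print(fsm.update({"threat_projected": True}))          # Defensive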

3.2.3 Tactic Selection

Tactic Selection selects the appropriate tactic for the operator to execute according to the operator's posture (determined through Situation Assessment) and the perceived environment (determined through Situation Awareness). The ARH crew SM Agent model uses plans, with the three operator postures (Neutral, Defensive and Offensive) as the first level of filtering, to determine an appropriate set of tactics to select from.

Offensive tactics serve to manage attacks against defined targets. The tactic chosen considers the operator's static and dynamic knowledge. For example, if the target to be attacked is hard, such as a bunker, a guided missile attack is appropriate. If the target to be attacked is soft, such as exposed troops, a rocket attack is appropriate.

Defensive tactics serve to protect the operator against defined threats. The tactic chosen considers the operator's static and dynamic knowledge. For example, the appropriate defensive tactic to employ varies with the type of threat, such as a gun or missile, and with where the threat is located, either near or far.

Neutral tactics manage the operator's actions in the absence of targets or threats. Typically they provide default behaviour for the operator, such as flying pre-mission briefed flight paths.

This approach to modelling Tactic Selection maps well onto a Finite State Machine implementation where individual tactics are Tactic Selection states, with state changes being governed by the operator's static and dynamic knowledge. Figure 6 contains a simplified example of the Offensive Tactic Selection state diagram used by the ARH crew operator model.

Figure 6: Offensive Tactic Selection State Diagram (Select Tactic, Rocket Attack and Guided Missile Attack states, with soft target detected and hard target detected transitions; the entry action of each state creates the corresponding tactic FSM)

As for Situation Awareness and Situation Assessment, the entry action for each state results in the creation of a corresponding Tactic Execution FSM and the termination of the previously active Tactic Selection FSM, together with the Tactic Execution FSM that it created.

3.2.4 Tactic Execution

Tactics or Standard Operating Procedures are defined by plans, which consist of a series of subtasks with well-defined events resulting in transitions through the subtasks. The Tactic Execution process manages the steps required to implement a particular tactic. Tactic Execution ultimately results in commands being issued by the operator models to physical models, for example flying the platform, controlling sensors and controlling weapons.

By way of example, we present the details of the Tactic Execution for a guided missile attack. The operator could have arrived at this tactic by: detecting a bunker at a mission briefed location (Situation Awareness); determining that the bunker is the primary target and that the ARH is to be offensive against it (Situation Assessment); and determining that the bunker is a hard target which needs a guided missile to destroy it (Tactic Selection).

The Guided Missile Attack Tactic consists of the following steps, as illustrated in Figure 7. First, plan a path towards the target and fly the path (1) until the ARH reaches 80% of its maximum weapons range (2). Next, pop up (ascend) to the missile firing altitude (3), then prepare and fire the missile once it is ready (4). After firing, guide the missile until it hits the target (5). Finally, descend (pop down) to a safe escape altitude (6).

Figure 7: Guided Missile Attack (flight path phases 1 to 6 relative to the target and the 80% maximum weapon range boundary)

Tactics described in this form map well onto a Finite State Machine. Figure 8 describes the Guided Missile Attack as a Finite State Machine.
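The state machine of Figure 8 can be sketched as a simple transition table, as below. The event strings follow the figure; the table-driven style and function names are assumptions for illustration rather than the SM Agent implementation.

# Sketch of the Guided Missile Attack Tactic Execution FSM of Figure 8.
GUIDED_MISSILE_ATTACK = {
    # state           : (event that completes it,         next state)
    "Fly Waypoint"    : ("reached 80% max weapon range",  "Pop Up"),
    "Pop Up"          : ("reached firing altitude",       "Fire"),
    "Fire"            : ("weapon released",               "Guide Missile"),
    "Guide Missile"   : ("weapon hit",                    "Pop Down"),
    "Pop Down"        : ("reached safe altitude",         None),   # tactic complete
}

def execute_step(state, event):
    """Advance the tactic one step; unrecognised events leave the state unchanged."""
    expected_event, next_state = GUIDED_MISSILE_ATTACK[state]
    return next_state if event == expected_event else state

state = "Fly Waypoint"
for ev in ["reached 80% max weapon range", "reached firing altitude",
           "weapon released", "weapon hit", "reached safe altitude"]:
    state = execute_step(state, ev)
print(state)   # None: the tactic has terminated and control returns to Tactic Selection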

Figure 8: Guided Missile Attack State Diagram (Fly Waypoint, Pop Up, Fire, Guide Missile and Pop Down states, with transitions on reached 80% max weapon range, reached firing altitude, weapon released, weapon hit and reached safe altitude)

The architecture of the SM Agent technology also supports the sharing of tactical subtasks amongst many tactics. For example, the capability of an operator to fly a series of waypoints is a common task that forms a critical part of many tactics. In the above example the initial phase of the attack is flying waypoints towards a firing position. The initial default behaviour of an operator is typically to fly a mission briefed set of waypoints. Similarly, an evasion tactic may require the operator to follow waypoints to avoid the threat. Figure 9 contains an example of a Finite State Machine representation of the plan to fly waypoints.

Figure 9: Waypoint Flying State Diagram (a single Fly Waypoint state with a self-transition "waypoint reached [more waypoints] / next waypoint" and an exit transition "waypoint reached [no more waypoints]")

At each simulation time step Tactic Execution may update the Dynamic Operator Knowledge. For example, the current waypoint for the platform, the remaining time of flight for a guided weapon, and the time the operator has spent executing the tactic may all be maintained as dynamic knowledge.

On completion of the Tactic Execution, for example when the platform has completed the pop-down manoeuvre of the guided missile attack or the last waypoint has been reached, the current Tactic is terminated. This is achieved by a change in state of a Finite State Machine at a higher level (either Situation Awareness, Situation Assessment or Tactic Selection), or as a result of a change in dynamic operator knowledge.

4. CONCLUSION

By constructing human operator models using the SM Agent architecture presented in this paper, the complexity of the overall model is reduced, facilitating knowledge transfer, understanding and discussion between analysts, domain experts, operators and software engineers. A model of an Armed Reconnaissance Helicopter (ARH) Crew has been developed using SM Agents and integrated into the BattleModel simulation environment, where it has been successfully used in conducting studies.

REFERENCES

1. Heinze, C. (2003) Modelling Teams and Organisations with Intelligent Agents, Proceedings of the Second Workshop on Computer Generated Forces and Behaviour Representation, May.
2. Heinze, C.; Smith, B.; Cross, M. (1998) Thinking Quickly: Agents for Modeling Air Warfare, Proceedings of the Australian Joint Conference on Artificial Intelligence AI '98, Brisbane, Australia.
3. Ibal, G.; Selvestrel, M.; et al. (2004) Air Operational Research in Support of Helicopter Defensive Tactic Development, Proceedings of the Ninth International Conference on Simulation Technology and Training, May.
4. Tidhar, G.; Heinze, C.; Goss, S.; Murray, G.; Appla, D.; Lloyd, I. (1999) Using Agents Intelligently, Proceedings of the Eleventh Innovative Applications of Artificial Intelligence Conference.