Defining autonomy is a difficult problem. Autonomy is context dependent, but one of the biggest things we look at when we talk about autonomy is decisions: decisions made in response to a changing environment. Depending on the context of the problem we're trying to solve, autonomy is going to look a little different, and the autonomy framework we're creating takes advantage of the fact that autonomy doesn't have a cut-and-dried definition. How can we apply machine learning algorithms to unmanned aerial systems? How does the operator interact with them in a human-machine teaming sense? And from there, how do we coordinate all of that action across a whole collection of multi-agent systems so that the system as a whole works together? We're hoping this autonomy framework will help us do some of that in the future.

Here within AFIT, we're definitely focused on the military aspects of autonomy: what can we do to make autonomous systems useful tools for our warfighters? Specifically, we have a student working on an instantiation of our autonomy framework on a single multirotor UAV. He's been able to implement some behavioral controllers that let the UAV plan a path and navigate around obstacles (one way such a controller can look is sketched at the end of this section). It's a simplified instantiation, but the exciting thing is that we'll be able to build on that effort to do more interesting autonomous-behavior work with UAVs.

It's important in autonomy, especially in the human-machine teaming arena, for the machine to understand what's going on with the human, particularly when the human and the machine are working together. One component of that is using machine learning techniques to figure out what state the human is in. AFIT has a large footprint in machine learning, and one of the key areas we're looking at right now is human state assessment: we put sensors on people and use machine learning to determine their state, for example what kind of workload they're experiencing, or whether they're confident in the decisions they're making (a sketch of that kind of classification pipeline also appears below).

We have a whole host of work going on in cooperative behavior and control, everything from formation flying to research on cooperative wide-area search and engagement.

This is all about making our operators more efficient and more effective in what they do. Sometimes that means replacing the forward-deployed person in the dull, dirty, dangerous mission; sometimes it's a matter of letting a single operator run multiple systems. We're trying to achieve the objectives the DoD is handed by our leaders at as low a human cost, and as low a dollar cost, as possible. If we can make operators more effective by building autonomy into the system, and further that by making these autonomous systems capable of cooperating with each other, if we can relieve some of that operator burden, that's what we're after.
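The transcript doesn't describe how the student's behavioral controllers actually work, so the following is only a minimal sketch of one common behavior-based approach, an artificial potential field, which blends a goal-seeking behavior with obstacle-avoidance behaviors into a single velocity command. Every function name, gain, and parameter here is a hypothetical illustration, not AFIT's implementation.

```python
# Minimal sketch of a behavior-based obstacle-avoidance controller for a
# multirotor, in the spirit of the framework instantiation described above.
# This uses a standard artificial-potential-field scheme; all names and
# gains are hypothetical.
import numpy as np

def potential_field_velocity(pos, goal, obstacles,
                             k_att=1.0, k_rep=0.5, influence=2.0, v_max=1.5):
    """Blend an attractive 'seek goal' behavior with repulsive
    'avoid obstacle' behaviors into one velocity command."""
    # Attractive behavior: head straight for the goal.
    v = k_att * (goal - pos)

    # Repulsive behavior: push away from each obstacle within range.
    for obs in obstacles:
        offset = pos - obs
        d = np.linalg.norm(offset)
        if 1e-6 < d < influence:
            # Repulsion grows sharply as the vehicle nears the obstacle.
            v += k_rep * (1.0 / d - 1.0 / influence) * offset / d**3

    # Saturate the command to the vehicle's speed limit.
    speed = np.linalg.norm(v)
    if speed > v_max:
        v = v * (v_max / speed)
    return v

# Toy usage: step a point-mass quadrotor model toward a goal past one obstacle.
pos = np.array([0.0, 0.0])
goal = np.array([10.0, 0.0])
obstacles = [np.array([5.0, 0.2])]
for _ in range(200):
    pos = pos + 0.05 * potential_field_velocity(pos, goal, obstacles)
print(pos)  # ends up near the goal, having skirted the obstacle
```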
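Similarly, the human state assessment work described above can be pictured as a supervised classification pipeline: windowed physiological sensor features in, an estimated operator state out. The transcript names no specific sensors, features, or models, so the feature set, the random-forest classifier, and the synthetic stand-in data below are all illustrative assumptions.

```python
# Minimal sketch of a human state assessment pipeline: classify operator
# workload from physiological features. Feature names, model choice, and
# data are assumptions; the data is synthetic, not real measurements.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical per-window features: mean heart rate, heart-rate variability,
# pupil diameter, and an EEG band-power ratio. Each row is one time window.
n_windows = 400
low = rng.normal([70, 55, 3.0, 1.0], [5, 8, 0.3, 0.20], (n_windows, 4))
high = rng.normal([88, 35, 3.8, 1.6], [6, 7, 0.3, 0.25], (n_windows, 4))

X = np.vstack([low, high])
y = np.array([0] * n_windows + [1] * n_windows)  # 0 = low, 1 = high workload

# Classify workload state; cross-validation estimates out-of-sample accuracy.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"estimated workload-classification accuracy: {scores.mean():.2f}")
```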
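Finally, one standard way to realize the formation-flying piece of the cooperative behavior and control work is consensus control over a communication graph, where each vehicle steers toward its neighbors' positions plus a desired formation offset. The topology, offsets, and gain below are illustrative assumptions, not AFIT's method.

```python
# Minimal sketch of consensus-based formation flying: each vehicle drives its
# relative position to each neighbor toward the desired formation offset.
# The ring topology, square formation, and gain are assumptions.
import numpy as np

# Four vehicles, 2-D positions; desired formation is a unit square.
offsets = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}  # ring topology

pos = np.random.default_rng(1).uniform(-3, 3, (4, 2))  # scattered start
k, dt = 1.0, 0.05

for _ in range(500):
    vel = np.zeros_like(pos)
    for i, nbrs in neighbors.items():
        for j in nbrs:
            # Drive the relative position (pos_i - pos_j) toward the
            # desired offset (offsets_i - offsets_j).
            vel[i] += -k * ((pos[i] - pos[j]) - (offsets[i] - offsets[j]))
    pos = pos + dt * vel

# Relative geometry now matches the square; absolute placement is wherever
# the consensus settled (this linear law fixes shape up to translation).
print(pos - pos[0])  # approximately the offsets relative to vehicle 0
```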