DARPA Geometries of Learning (GoL) DARPA-PA-21-04

Category: Opportunity

DoD Communities Of Interest: Artificial Intelligence

Subject: DARPA Geometries of Learning (GoL) DARPA-PA-21-04

Due Date: Feb 03, 2022 04:00 pm EST

Government Organization: Defense Advanced Research Projects Agency (DARPA)

Description:

On January 5, the Defense Advanced Research Projects Agency (DARPA) issued an Artificial Intelligence Exploration (AIE) Opportunity inviting submissions of innovative basic or applied research concepts in the technical domain of high-dimensional geometric analysis of deep neural networks (DNNs) applied to image data. Responses are due by 4:00 p.m. Eastern on February 3.

The Geometries of Learning (GoL) AIE Opportunity aims to advance the theory of Deep Learning (DL) by better understanding the geometry of natural images in image space, the geometry of functions that map from image space to feature and label space, and the geometry of how these mapping functions evolve during training.

This AIE Opportunity is issued under the Program Announcement for AIE, DARPA-PA-21-04. All awards will be made in the form of an Other Transaction (OT) for a prototype project. The total award value for the combined Phase 1 base (Feasibility Study) and Phase 2 option (Proof of Concept) is limited to $1,000,000. This total includes Government funding and performer cost share, if required or proposed.

The Department of Defense is committed to rapidly adopting machine learning technology. Unfortunately, the practice of DL has outpaced the theory, creating barriers to adoption. The underlying hypothesis is that the set of images of an object forms a manifold in image space, which can be used to restrict the mapping from image space to label space and to improve the training process for DNNs. A better understanding of the geometries of learning is expected to yield practical insights into many difficult problems. Examples include:

  • Adversarial AI: Adversarial examples are images or scenes that an adversary has modified to fool deep networks. One hypothesis for why deep networks are vulnerable to adversarial attacks is that they are forced to map all of image space to label space, even though most of image space is random noise. Understanding the manifolds that natural images form in image space could restrict the function’s domain, making neural nets less susceptible to adversarial attacks (a minimal sketch follows this list).
  • Explainable AI: Human operators need to understand the basis for the output of a DNN beyond the label the network assigns to an image (see the DARPA Explainable Artificial Intelligence (XAI) program at https://www.darpa.mil/program/explainable-artificial-intelligence). If we understand the manifold created by the images of an object, then the position of an image on the manifold conveys additional information, such as the pose of the object or how the scene is illuminated. Instead of producing just a label (e.g., “cat”), deep networks might provide descriptions (e.g., “cat, lying down, viewed from the side”).
  • Trainable AI: A barrier to machine learning in many applications is the lack of sufficiently large training sets. Again, if we understand the geometry of image manifolds, we can determine whether we have enough samples to model an object and, if not, which samples we need to add.
  • Trustworthy AI: Operators need to know when to trust machine learning systems and when not to. One reason to distrust a deep network is instability in the learning process: if inserting or removing a few natural samples from the training set significantly changes the network’s decisions, those decisions may not be reliable.
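
To make the adversarial-example hypothesis in the first bullet concrete, here is a minimal Python sketch of the fast gradient sign method (FGSM) applied to a toy logistic-regression stand-in for a deep network. The weights, the sample "image," and the step size eps are hypothetical choices for illustration only; none of them come from the GoL announcement.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy stand-in for a trained network: logistic regression on
    # 64-pixel vectors. The weights and the sample "image" are random;
    # only the attack mechanics matter here.
    w = rng.normal(size=64)
    x = rng.normal(size=64)

    def predict(v):
        # Probability that input v belongs to class 1.
        return 1.0 / (1.0 + np.exp(-(w @ v)))

    # FGSM: nudge every pixel by eps in the direction that increases
    # the loss. For logistic loss with true label y, the gradient of
    # the loss with respect to the input is (p - y) * w.
    y, eps = 1.0, 0.1
    grad_x = (predict(x) - y) * w
    x_adv = x + eps * np.sign(grad_x)

    print(f"clean score:       {predict(x):.3f}")
    print(f"adversarial score: {predict(x_adv):.3f}")

A perturbation of only eps per pixel shifts the classifier's score by roughly eps times the sum of the absolute weights, typically flipping a confident prediction; restricting the mapping's domain to the image manifold is one proposed way to remove this excess sensitivity.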

GoL will exploit the classic observation that naturally occurring images lie on manifolds occupying only a tiny portion of image space, creating the opportunity for modern analysis approaches to be restricted to these manifolds. If successful, this will improve our theoretical understanding of adversarial AI, explainable AI, trainable AI, and trustworthy AI.
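
The scale of that observation is easy to check numerically. The Python sketch below is our illustration, not part of the announcement: it builds a one-parameter family of synthetic 100-pixel "images" (a smooth bump shifted to every position) and uses PCA to show that the family concentrates in a handful of directions, while random noise of the same size fills nearly the whole space. The same intrinsic-dimension estimate also bears on the Trainable AI question of whether a training set has enough samples.

    import numpy as np

    # A one-parameter family of synthetic 100-pixel "images": a smooth
    # bump circularly shifted to every position. The images live in
    # 100-dimensional pixel space, but the family itself is a curve
    # (a 1-D manifold) inside that space.
    n = 100
    grid = np.arange(n)
    bump = np.exp(-0.5 * ((grid - n / 2) / 5.0) ** 2)
    images = np.stack([np.roll(bump, s) for s in range(n)])

    def components_for(data, frac=0.99):
        # PCA via SVD: number of principal directions needed to
        # capture `frac` of the variance in `data`.
        s = np.linalg.svd(data - data.mean(axis=0), compute_uv=False)
        ratio = np.cumsum(s**2) / np.sum(s**2)
        return int(np.searchsorted(ratio, frac)) + 1

    rng = np.random.default_rng(0)
    noise = rng.normal(size=(n, n))
    print(f"bump manifold: {components_for(images)} of {n} components")
    print(f"random noise:  {components_for(noise)} of {n} components")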

Website: https://sam.gov/opp/761a5bea003a437e9001fa7c26febf2b/view

For questions or assistance, contact:
North Carolina Defense Technology Transition Office (DEFTECH)

Dennis Lewis
lewisd@ncmbc.us
703-217-3127

Bob Burton
burtonr@ncmbc.us
910-824-9609

North Carolina Defense Technology Transition Office | PO Box 1748, Fayetteville, NC 28303