Behind the scenes, the role of artificial intelligence (AI) in the US military is growing quickly

Writing a maintenance plan for an airplane is far less exciting than the killer robots or dystopian visions sometimes linked to artificial intelligence and its rise in the U.S. military.

Defense industry leaders, on the other hand, say that such seemingly insignificant tasks are a perfect example of the tangible, everyday capabilities that AI and machine learning can provide, and that the growing partnership between humans and machines will lead to a more effective and safer fighting force.

From predicting equipment failures on an F-16 before they happen to sorting through mountains of data to correct overhead video in real time for a U.S. special operations team, the rapidly growing role of artificial intelligence in today’s military is often much less exciting than its critics suggest, but more important than most people realize.

Rather than avoid a discussion of the moral ramifications of AI and its many applications in warfighting, industry insiders argue that it would be foolish, if not immoral, to ignore the technology when it can do so much good in the right hands.

“These are frequently 18- and 19-year-olds who have had months of experience and training. They’re told, ‘OK, your role at the operational level is to maintain this F-16.’”

“I believe it is more ethical to provide them with the tools to assist in the implementation of the appropriate solution rather than rely on them to guess what the problem is,” said Logan Jones, general manager and president of SparkCognition Government Systems, an AI-focused firm devoted to the government and national defense sectors.

Mr. Jones spoke to The Washington Times during a recent U.S. special operations symposium in Tampa that drew companies from around the world, including many at the forefront of artificial intelligence and its military uses.

The “digital maintenance adviser,” presently in use by the Air Force, is one of Spark’s AI tools that can trawl through massive volumes of data, including handwritten maintenance logs, to help spot problems and suggest fixes far faster than a human brain alone.

“You give someone a tablet and they use it to help them better triage, or take symptoms and offer a recommendation on what the problem might be. AI at the edge,” Mr. Jones said, before turning to the ethical debates surrounding AI and what it should and shouldn’t do.
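
The article doesn’t detail how SparkCognition’s adviser works under the hood, but the tablet-based triage Mr. Jones describes resembles a text classifier trained on historical maintenance logs. A minimal sketch of that idea, with entirely hypothetical log data and fault labels:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical historical maintenance logs: free-text symptom -> confirmed fault.
logs = [
    ("hydraulic pressure drops during taxi", "hydraulic pump wear"),
    ("engine vibration above 80% throttle", "turbine blade imbalance"),
    ("intermittent radar display dropout", "wiring harness fault"),
    ("hydraulic fluid leak near left gear", "hydraulic pump wear"),
]
texts, faults = zip(*logs)

# Bag-of-words model: crude, but enough to rank likely faults from a description.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(texts, faults)

# A maintainer types symptoms on a tablet; the model returns ranked suggestions.
symptom = "pressure loss in hydraulic line on taxi"
probs = model.predict_proba([symptom])[0]
for fault, p in sorted(zip(model.classes_, probs), key=lambda t: -t[1]):
    print(f"{fault}: {p:.2f}")
```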

He argues that the focus on the spectacular in the AI debate so far distracts from what is actually useful.

“If you look at the world of possibilities, a healthy argument to have is based on a limited subset of use cases,” he remarked. “I believe it detracts from the enormous amount of value that is available today. In the military and national security sectors, there’s a lot of low-hanging fruit.”

While critics often focus on so-called “killer robots” and existential debates about whether AI can determine the value of human life, deep within the Pentagon, the focus is usually on how machines can quickly go through data, process reports, scour job applications for the right candidates, sort audio and video files, and perform other routine tasks.

Those missions have turned into a lucrative business. The Pentagon is expected to spend as much as $874 million on AI programs this year, an amount that has climbed considerably in recent years.

Because the Defense Department and its business partners are working on hundreds of AI-related programs, many of which remain highly classified and whose details won’t be made public, it’s hard to say exactly how much money is being spent.

Pentagon officials appear to be most enthusiastic about AI’s ability to process, analyze, and organize vast amounts of data gathered from multiple sources on or near the battlefield.

Officials say military units shouldn’t rely on drone footage alone: so much open-source and commercial satellite imagery and other information is now available online that it, too, must be examined in real time.

The problem comes in pulling all of that information together and figuring out what it means in a matter of minutes or seconds.

“How do you combine it without overburdening the operator … such that the operator has a holistic degree of confidence without having to perform all of the work?” James Smith, acquisition executive for U.S. Special Operations Command, told reporters during a question-and-answer session at last month’s Tampa convention.

“What artificial intelligence could bring to bear on that really interesting problem is providing a very easy user interface to the operator to say, ‘Here’s a level of confidence about what you’ll observe on this terrain,'” he added.
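
Neither Mr. Smith nor the article spells out how such a confidence figure would be computed. One toy way to think about it, assuming each source independently reports a detection probability (a strong assumption a real system would not make lightly), is a simple log-odds fusion:

```python
import math

def fuse_confidences(probs):
    """Fuse independent per-source detection probabilities by summing log-odds.

    A naive-Bayes-style combination with a uniform prior; a real fusion
    engine would also have to model correlation between sources.
    """
    logit = sum(math.log(p / (1 - p)) for p in probs)
    return 1 / (1 + math.exp(-logit))

# Hypothetical inputs: drone video, commercial satellite imagery, open-source report.
sources = {"drone_video": 0.70, "satellite": 0.60, "osint": 0.55}
print(f"fused confidence: {fuse_confidences(sources.values()):.2f}")  # ~0.81
```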

The humans and the machines

For skeptics, the rise of AI and the potential deployment of autonomous weapons raise serious moral concerns, and they argue that governments around the world should adopt strict new regulations to prevent their use.

The Campaign to Stop Killer Robots, for example, has been a leading international voice pressing the U.S., Britain and other major powers to limit the military’s use of AI.

In a recent statement, Clare Conboy, the campaign’s media and communications manager, said, “It’s time for government leaders to draw a legal and moral line against the slaughter of humans by machines and commit to negotiating a new international law on weapons system autonomy.”

Industry leaders, on the other hand, argue that fears of killer robots developing minds of their own and beginning to kill people are far removed from even today’s cutting-edge systems.

According to Brandon Tseng, co-founder and president of Shield AI, “there will be a human on the loop all day, every day.”

“I believe that many individuals go straight to Hollywood and think ‘worst-case scenario.’ But there’s a lot of nuance in between,” he explained. “Making a system secure necessitates a lot of technology and engineering.”

Mr. Tseng described his company’s portfolio as “self-driving technologies for unmanned systems.” Shield AI, he explained, specializes in unmanned systems that use a program called “Hivemind” to operate without GPS.

This technology lets military personnel assign the system a mission and then let the machine complete the task on its own. Because the system can maneuver and make judgments by itself, there is no need for human hands on a joystick to control its every move while it scours a building for hostages, for example.

“You just want to tell it what to do, and it should carry out that objective,” he explained.
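
Shield AI has not published how Hivemind is built, but the tasking model Mr. Tseng describes, where the operator states an objective rather than steering, can be sketched in miniature. All types and names here are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Mission:
    objective: str
    rooms_to_search: list  # relative map cells, not GPS fixes

def execute(mission: Mission) -> None:
    """Hypothetical mission executor: the operator assigns an objective once;
    the vehicle sequences its own actions with no per-move joystick input."""
    for room in mission.rooms_to_search:
        # Onboard perception and planning (not shown) would handle flight,
        # mapping and obstacle avoidance in a GPS-denied interior.
        print(f"searching {room} for: {mission.objective}")
    print("mission complete, returning to operator")

execute(Mission(objective="hostages", rooms_to_search=["entry", "hall", "room_a"]))
```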

Mr. Tseng says that when the company’s AI technology is installed in a fighter plane, it can produce the equivalent of a pilot with decades of flight experience in just a few days.

“They will develop into fantastic pilots, capable of achieving things that no one else could dream of,” he predicted.

Although such technology may appear frightening to some, Mr. Tseng and other supporters argue that AI-piloted planes can take more risks and perform more daring maneuvers than human-piloted planes, all without endangering the lives of actual pilots.

Mr. Tseng stated, “This is where the battlefield is heading.”

Aside from the fight itself, AI will be a key part of making sure that the information shown on the screens of military personnel is accurate.

Craig Brower, president of EdgyBees’ U.S. government operations, said his company aspires to “make the world accurate” by using artificial intelligence to fix raw video footage used by troops, firefighters, police, and others on the front lines.

Although satellite footage appears precise to the naked eye, “it can be wrong by 20 to 120 meters,” Mr. Brower said. Humans have traditionally made such corrections and verifications through arduous, time-consuming processes that can waste precious time.

“What the technology is doing is our AI machine learning is looking at that video feed as it comes in, in real time, and detecting control points across that scenario,” Mr. Brower explained in a conversation just off the Tampa convention center floor. “Then it’s a matter of mapping those control points to an image and elevation base,” he said.

He says the corrections happen “practically instantly,” which would be critical for a time-sensitive military mission.
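
EdgyBees has not disclosed its pipeline, but control-point registration of the kind Mr. Brower describes is a standard computer-vision technique. A rough sketch using OpenCV, with illustrative file names, assuming a single live frame is aligned to a pre-georeferenced satellite base image:

```python
import cv2
import numpy as np

# Hypothetical inputs: a live video frame and a georeferenced reference image.
frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
base = cv2.imread("satellite_base.png", cv2.IMREAD_GRAYSCALE)

# Detect candidate control points in both images.
orb = cv2.ORB_create(nfeatures=2000)
kp_f, des_f = orb.detectAndCompute(frame, None)
kp_b, des_b = orb.detectAndCompute(base, None)

# Match descriptors and keep the strongest correspondences.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_f, des_b), key=lambda m: m.distance)[:200]

src = np.float32([kp_f[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# Robustly estimate the frame-to-base mapping; RANSAC discards bad control points.
H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Warp the live frame into the base image's (georeferenced) pixel grid.
registered = cv2.warpPerspective(frame, H, (base.shape[1], base.shape[0]))
cv2.imwrite("registered_frame.png", registered)
```

A production system would repeat this per frame against an elevation-aware base layer, as Mr. Brower indicates; the single-image homography above is the simplest flat-terrain approximation.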