Israel’s use of AI during Gaza war raises alarms worldwide

High-tech weapons and AI audio tools track Hamas figures by analysing phone calls and environmental sounds

Artificial Intelligence is no longer just about chatbots and smart assistants — it’s now making life-or-death decisions on the battlefield.
During its latest war in Gaza, Israel rapidly deployed AI-powered tools that revolutionised how it tracked enemies, but also led to civilian casualties and sparked serious ethical questions. According to The New York Times, no country has used AI in live warfare as aggressively or openly as Israel — a preview of what wars of the future might look like.
AI hunts down Hamas

In October 2023, Israeli forces were hunting Ibrahim Biari, a senior Hamas figure believed to be hiding in Gaza’s underground tunnel networks.
Conventional intelligence couldn’t locate him, so Israel turned to a new AI-based audio analysis tool developed by its cyber-intelligence unit, Unit 8200. The system analysed Biari’s phone calls, background noise, and environmental sounds to estimate his location.
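The report does not explain how the Unit 8200 tool works internally. As a rough, hypothetical illustration of one underlying idea, acoustic-environment matching, the sketch below summarises a recording’s background sound as an MFCC fingerprint and ranks candidate locations by similarity. Every function name and the location profiles are invented for illustration; this is a minimal sketch, not the actual system.

```python
# Hypothetical sketch of acoustic-environment matching, NOT the Unit 8200 tool.
# Idea: reduce a call's background audio to an MFCC "fingerprint" and compare it
# against fingerprints recorded at known locations.
import numpy as np
import librosa  # pip install librosa

def audio_fingerprint(path: str) -> np.ndarray:
    """Load audio and reduce it to a mean MFCC vector (a crude sound profile)."""
    y, sr = librosa.load(path, sr=16_000, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)  # shape: (20, frames)
    return mfcc.mean(axis=1)                            # average over time

def rank_locations(call_path: str,
                   profiles: dict[str, np.ndarray]) -> list[tuple[str, float]]:
    """Rank candidate locations by cosine similarity to the call's fingerprint."""
    v = audio_fingerprint(call_path)
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
    return sorted(((name, cos(v, p)) for name, p in profiles.items()),
                  key=lambda t: t[1], reverse=True)

# Profiles would be built from reference recordings at known places (hypothetical):
# profiles = {"market": audio_fingerprint("market.wav"),
#             "tunnel_entrance": audio_fingerprint("tunnel.wav")}
# print(rank_locations("intercepted_call.wav", profiles))
```

A real system would fuse many more signals, but even this toy version shows why such estimates are probabilistic: the ranking can put a wrong location first just as confidently as the right one.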
The strike went ahead on October 31, killing Biari along with more than 125 civilians, according to the watchdog group Airwars. Hadas Lorber of the Holon Institute of Technology told the NYT: “It led to game-changing technologies on the battlefield and advantages that proved critical in combat.”

War-tech startup culture

Inside Unit 8200, a tech hub known as ‘The Studio’ was created to rapidly test and deploy battlefield AI. Soldiers and reservists, many from tech giants like Google, Meta, and Microsoft, worked together to prototype tools for facial recognition, drone targeting, and language analysis. Among the major experiments was a chatbot that helped Israel analyse regional reactions, even understanding slang and dialects. But the new tools weren’t always reliable, sometimes mixing up slang or misidentifying objects (like confusing pipes for guns).

Mistakes, misfires, and civilian risk

One major issue was false positives. Facial recognition software at checkpoints flagged innocent Palestinians, leading to wrongful arrests, according to intelligence insiders.
Lavender, the target-picking AI, also had flaws: some of its predictions rested on weak patterns or outdated data. Meanwhile, drones were upgraded to use AI tracking. Instead of locking onto a single reference image, they could now follow people or vehicles in real time, increasing both accuracy and risk. Aviv Shapira, CEO of XTEND, told the NYT: “Now AI can recognise and track the object itself — with deadly precision.”
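Nothing in the report describes XTEND’s actual method. As a toy illustration of the shift Shapira describes, the hypothetical sketch below contrasts matching a single stored image with re-locating a target frame by frame using plain OpenCV template matching, refreshing the template as the object moves; all names and parameters are invented.

```python
# Hypothetical sketch of frame-by-frame tracking, NOT any real drone system.
import cv2  # pip install opencv-python

def locate(frame, template):
    """Find the best match for `template` in `frame`; returns (x, y, score)."""
    result = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
    _, score, _, (x, y) = cv2.minMaxLoc(result)
    return x, y, score

def track(video_path, template, min_score=0.6):
    """Follow a target across frames by re-matching each frame and refreshing
    the template, so the tracker adapts as the object's appearance changes."""
    cap = cv2.VideoCapture(video_path)
    h, w = template.shape[:2]
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        x, y, score = locate(frame, template)
        if score >= min_score:
            # Refresh: track the object itself, not one fixed image of it.
            template = frame[y:y + h, x:x + w]
            yield (x, y, w, h)
    cap.release()

# Usage (hypothetical): iterate over bounding boxes for an initial cropped patch.
# for box in track("drone_clip.mp4", first_patch):
#     print(box)
```

Refreshing the template each frame is what lets the tracker follow a moving target, but it is also how drift and misidentification creep in, which is exactly the accuracy-versus-risk trade-off the article points to.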
War gets smarter, but also darker

While Israel says these tools saved lives and sped up intelligence work, some insiders and tech leaders are sounding the alarm. Hadas Lorber told the NYT: “These technologies raise serious ethical questions. Humans must make the final decisions.” The Israeli military says it is committed to using the technology responsibly, but investigations into high-casualty strikes like the one that killed Biari are still ongoing.

The bigger picture

Israel’s use of AI in Gaza is a wake-up call. These tools bring power, precision, and deep moral dilemmas. As AI becomes a weapon of war, the world must ask: should we trust machines to make life-and-death decisions?
