I have three dogs at home. The youngest, who is still a puppy, is threatening to demolish our living room couch. I’d like to know when my furry terminator gets into demolition mode so that I can salvage the situation.

The trouble is that we cannot monitor the living room round the clock. While I can arm the webcam with motion-detection capabilities, I need to be alerted about a specific kind of motion. The webcam should be clever enough to distinguish the movement of a couch-eating dog from that of a well-meaning vacuum cleaner. Could AI help me take swift, decisive action?

So far, AI has relied on centralized processing: a cloud back end that functions as the brain of the system, where the deep learning resides. Such an architecture demands massive compute power, supplied by major cloud providers who also double as AI providers. It will inevitably become unwieldy as large AI platforms connect to countless clients. Imagine a drone with computer vision sending images thousands of miles away for analysis while a life-threatening situation unfolds. There is something fundamentally dysfunctional about this approach in the long run.

Not surprisingly, the AI industry's focus is moving to the edge. This allows IoT devices such as cameras, phones, and home appliances to process information locally, without having to push data to a central AI platform in the cloud. For instance, Amazon's AWS DeepLens camera lets you run deep learning models locally on the camera itself. Trained models are pushed to the devices, which then predict the next best action locally.
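To make the idea concrete, here is a minimal sketch of edge inference. Everything in it is hypothetical (the feature names, the weights, the `classify_motion` function are all invented for illustration); the point is the division of labor: the cloud trains the model and pushes only the weights to the device, and every prediction then runs locally, so no video frames ever leave the camera.

```python
import math

# Weights as they might arrive on the device, pushed from the cloud
# after training. (Purely illustrative values.)
PUSHED_WEIGHTS = {"speed": 1.8, "erratic": 2.4, "bias": -3.0}

def classify_motion(features, weights=PUSHED_WEIGHTS):
    """On-device logistic scoring: probability the motion is a dog, not a vacuum."""
    z = (weights["speed"] * features["speed"]
         + weights["erratic"] * features["erratic"]
         + weights["bias"])
    return 1.0 / (1.0 + math.exp(-z))

# A vacuum cleaner moves slowly and steadily; a puppy in demolition mode
# moves fast and erratically. Only these summary features are scored.
vacuum = {"speed": 0.4, "erratic": 0.1}
puppy = {"speed": 1.2, "erratic": 1.5}
```

The camera only needs to raise an alert when the score crosses a threshold; the raw footage stays in the living room.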

At the edge, context is the driving force. Devices and intelligent apps will take quick decisions based on what they learn from their immediate surroundings. The smart camera in my living room could refine its learning models based on my dog's actions just before he attacks the couch. It could correlate the time of year and day with his activities to better predict his destructive tendencies. What matters is extreme sensitivity to changing conditions. Google calls this approach federated learning: it decouples the ability to do machine learning from the need to store the data in the cloud. AI architectures will therefore shift steadily toward distributed processing, improving latency and enabling real-time computation.
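The mechanics of federated learning can be sketched in a few lines. In this toy version (a simple linear model with invented data; real systems are far more elaborate), each edge device runs a training step on its own observations, and only the resulting weights, never the raw data, travel to a central server, which averages them into a consensus model.

```python
def local_update(weights, data, lr=0.1):
    """One on-device gradient-descent step for a linear model y = w*x + b."""
    w, b = weights
    gw = gb = 0.0
    for x, y in data:
        err = (w * x + b) - y
        gw += err * x
        gb += err
    n = len(data)
    return (w - lr * gw / n, b - lr * gb / n)

def federated_average(updates):
    """The server averages the weights from all devices; it never sees the data."""
    n = len(updates)
    return (sum(u[0] for u in updates) / n, sum(u[1] for u in updates) / n)

# Two cameras, each with its own private observations (toy data near y = 2x).
device_data = [
    [(1.0, 2.1), (2.0, 4.0)],
    [(1.5, 3.2), (3.0, 5.9)],
]

global_model = (0.0, 0.0)
for _ in range(500):  # communication rounds between devices and server
    updates = [local_update(global_model, d) for d in device_data]
    global_model = federated_average(updates)
```

After enough rounds, the shared model fits the combined data even though no device ever shared an observation, which is exactly the decoupling the paragraph above describes.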

AI at the edge will help breed a whole slew of startups, spurring innovation that will transform the AI industry as a whole. Intelligent apps and devices will accelerate the maturing of learning models through their contributions of contextual data. A self-driving car learns and adapts on its own, and at the same time sends valuable data back to the company, which updates a consensus learning model for all cars to leverage.

AI startups occupying the edge are well positioned to challenge the tech giants. Fast.ai, a small Silicon Valley nonprofit lab that works with part-time students, developed a deep-learning algorithm that outperformed Google's in a benchmark exercise run by Stanford scientists. The benchmark used a common image-classification task to track the speed of a deep-learning algorithm per dollar of compute power. Such startups will redefine business models and service delivery, just as the app ecosystem forced industries to rethink public transportation, media, and hospitality. My living room couch, facing an existential threat, could rely on AI at the edge to curb its canine enemy.

Shalini Verma is the CEO of PIVOT technologies.