Vehicles (Traffic Cam)
Real-time vehicle detection for traffic camera video and live streams

eyepop.vehicle:latest
Model type
Pre-trained Model
Description
Identifies vehicles in traffic camera footage (images, recorded video, or live streams) and returns structured bounding box coordinates with confidence scores. No training. No configuration. Just provide an image or stream and your API key.
This model is optimized for:
- Low latency inference
- High precision vehicle localization in traffic-cam perspectives
- Cloud or On-Prem deployment
- Edge-friendly real-time processing
Why This Model Exists
Most “vehicle detection” pipelines break down for one of two reasons:
- They require custom training for every camera angle, city, or lighting condition.
- They’re too slow (or too brittle) to run continuously on real traffic streams.
This model removes both constraints. The goal is simple:
Give developers reliable vehicle localization from traffic-camera viewpoints immediately, with no ML overhead.
Bounding boxes are the atomic unit of traffic intelligence systems:
- Counting and classification
- Flow and density measurement
- Lane utilization analysis
- Queue length estimation
- Turning movement analytics
- Tracking (when paired with a tracking layer)
This model provides that atomic unit.
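As a concrete illustration of the first item, counting: the sketch below (the `count_vehicles` helper and the sample list are hypothetical, but the field names `category`, `classLabel`, and `confidence` match this model's output) tallies detections per class with only the standard library.

```python
from collections import Counter

def count_vehicles(objects, min_confidence=0.5):
    """Tally detections per class label, dropping low-confidence boxes."""
    return Counter(
        obj["classLabel"]
        for obj in objects
        if obj.get("category") == "vehicle" and obj["confidence"] >= min_confidence
    )

# Hypothetical detections in the shape of this model's output.
detections = [
    {"category": "vehicle", "classLabel": "car", "confidence": 0.91},
    {"category": "vehicle", "classLabel": "truck", "confidence": 0.88},
    {"category": "vehicle", "classLabel": "car", "confidence": 0.42},  # filtered out
]
print(count_vehicles(detections))
```

Lowering `min_confidence` trades precision for recall; the 0.5 default here is an illustrative choice, not a recommendation from the model.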
Key Capabilities
Input Types
- Single images
- Video files
- RTSP / livestream feeds
- Webcam streams
Output
- JSON with bounding box coordinates
- Confidence scores
- Frame-level detections (for video)
Deployment
- EyePop Cloud
- On-Premise AI Application Runtime
- Edge devices with GPU or CPU
Setup
- Create account
- Get API key
- Send media
- Receive structured results
No hyperparameters. No labels. No model tuning.
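The four steps above reduce to one authenticated HTTP call. The sketch below only *builds* such a request with the standard library and does not send it; the endpoint URL and `Bearer` header scheme are assumptions for illustration, so consult the EyePop documentation (or an official SDK) for the real upload path.

```python
import urllib.request

API_KEY = "YOUR_API_KEY"  # step 2: obtained from your EyePop account
ENDPOINT = "https://api.eyepop.ai/inference"  # hypothetical URL, illustration only

def build_upload_request(image_bytes: bytes) -> urllib.request.Request:
    """Build (but do not send) an authenticated POST carrying raw image bytes."""
    return urllib.request.Request(
        ENDPOINT,
        data=image_bytes,
        headers={
            "Authorization": f"Bearer {API_KEY}",  # auth scheme is an assumption
            "Content-Type": "image/jpeg",
        },
        method="POST",
    )

req = build_upload_request(b"\xff\xd8...fake jpeg bytes...")
```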
Example Output
{
  "objects": [
    {
      "category": "vehicle",
      "classLabel": "car",
      "confidence": 0.9134,
      "height": 118.22,
      "id": 1,
      "orientation": 0,
      "width": 201.77,
      "x": 642.55,
      "y": 388.10
    },
    {
      "category": "vehicle",
      "classLabel": "truck",
      "confidence": 0.8841,
      "height": 164.02,
      "id": 2,
      "orientation": 0,
      "width": 278.34,
      "x": 412.90,
      "y": 361.44
    }
  ],
  "source_height": 1080,
  "source_width": 1920
}
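Because the response is plain JSON, post-processing needs nothing beyond the standard library. A sketch that converts each box above to corner coordinates, assuming `x`/`y` denote the top-left corner of the box (verify this convention against the EyePop documentation):

```python
import json

raw = """{"objects": [
  {"category": "vehicle", "classLabel": "car", "confidence": 0.9134,
   "height": 118.22, "id": 1, "orientation": 0, "width": 201.77,
   "x": 642.55, "y": 388.10},
  {"category": "vehicle", "classLabel": "truck", "confidence": 0.8841,
   "height": 164.02, "id": 2, "orientation": 0, "width": 278.34,
   "x": 412.90, "y": 361.44}],
 "source_width": 1920, "source_height": 1080}"""

result = json.loads(raw)
corners = {}
for obj in result["objects"]:
    # Assumption: (x, y) is the top-left corner, so bottom-right = corner + size.
    x2 = obj["x"] + obj["width"]
    y2 = obj["y"] + obj["height"]
    corners[obj["classLabel"]] = (obj["x"], obj["y"], x2, y2)
    print(f'{obj["classLabel"]}: bottom-right at ({x2:.1f}, {y2:.1f})')
```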
Practical Use Cases
Below are applications where bounding-box-level vehicle detection is sufficient and powerful.
Traffic Operations & ITS
- Vehicle counts by lane and direction
- Queue length estimation at intersections
- Signal timing support inputs
- Congestion monitoring and alerts
- Turning movement counts (with ROI + aggregation logic)
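As a sketch of the "ROI + aggregation logic" mentioned above: if each lane is approximated by a vertical strip of pixels (the lane geometry here is invented for illustration; real traffic cams usually need polygonal zones), per-lane counting reduces to a point-in-range test on box centers. Again assumes `x`/`y` are the top-left corner.

```python
def box_center(obj):
    # Assumption: (x, y) is the top-left corner; the center is corner + half size.
    return obj["x"] + obj["width"] / 2, obj["y"] + obj["height"] / 2

def count_per_lane(objects, lanes):
    """Count box centers falling in each lane strip.

    lanes: {name: (x_min, x_max)} -- illustrative straight-on geometry.
    """
    counts = {name: 0 for name in lanes}
    for obj in objects:
        cx, _ = box_center(obj)
        for name, (lo, hi) in lanes.items():
            if lo <= cx < hi:
                counts[name] += 1
                break
    return counts

lanes = {"lane_1": (0, 640), "lane_2": (640, 1280), "lane_3": (1280, 1920)}
objects = [
    {"x": 642.55, "y": 388.10, "width": 201.77, "height": 118.22},  # center x ~743
    {"x": 412.90, "y": 361.44, "width": 278.34, "height": 164.02},  # center x ~552
]
print(count_per_lane(objects, lanes))
```

Aggregating these counts over time windows yields the flow and density measurements listed earlier.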
City Planning & Mobility Analytics
- Traffic density measurement over time
- Corridor performance tracking
- Mode share insights (cars vs. buses vs. bikes—where supported)
- Infrastructure impact analysis after changes (new lanes, closures, detours)
Incident Awareness & Response
- Stopped-vehicle detection inputs (with additional logic)
- Unusual congestion pattern detection
- Lane blockage monitoring
- Event-day traffic monitoring
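For the "additional logic" behind stopped-vehicle detection, one common approach (not specific to this model) is to compare a vehicle's box across consecutive frames: a high intersection-over-union means it barely moved. A minimal sketch with hypothetical helper names:

```python
def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes, top-left origin."""
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    iw = max(0.0, min(ax2, bx2) - max(a[0], b[0]))
    ih = max(0.0, min(ay2, by2) - max(a[1], b[1]))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

def is_stationary(prev_box, cur_box, threshold=0.9):
    """Flag a tracked box as (near-)stationary between two frames."""
    return iou(prev_box, cur_box) >= threshold
```

A real deployment would require the overlap to persist across many frames before raising a stopped-vehicle alert, to avoid flagging vehicles briefly halted in normal traffic.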
Autonomous & ADAS Data Workflows
- Pre-labeling support for training datasets
- Rapid scene filtering (find frames with trucks/buses/motorcycles)
- Bounding-box generation for downstream curation pipelines
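Scene filtering, for instance, is a one-pass scan over per-frame detections. A sketch (the helper name and frame structure are illustrative; `frames` is a list of per-frame `objects` arrays):

```python
def frames_with(frames, labels, min_confidence=0.5):
    """Return indices of frames containing at least one wanted detection."""
    wanted = set(labels)
    return [
        i for i, objects in enumerate(frames)
        if any(o["classLabel"] in wanted and o["confidence"] >= min_confidence
               for o in objects)
    ]

frames = [
    [{"classLabel": "car", "confidence": 0.90}],
    [{"classLabel": "truck", "confidence": 0.88},
     {"classLabel": "car", "confidence": 0.70}],
    [],
    [{"classLabel": "truck", "confidence": 0.30}],  # below the confidence floor
]
print(frames_with(frames, {"truck", "bus"}))
```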
Why Bounding Boxes Matter
A bounding box is not just a rectangle. It gives you:
- Position (pixel-space location)
- Scale (size proxy)
- Region-of-interest cropping
- Motion vector (when tracked over time)
- Density + occupancy signals (when aggregated by lane/zone)
If you know where vehicles are in pixel space, you can derive everything else with additional logic.
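Two of those derivations in code form: clamped crop coordinates for region-of-interest extraction, and a motion vector from tracked centers. Both helpers are illustrative sketches and again assume `x`/`y` are the top-left corner.

```python
def crop_region(obj, src_w, src_h):
    """Integer pixel bounds for cropping a detection, clamped to the frame."""
    x1 = max(0, int(obj["x"]))
    y1 = max(0, int(obj["y"]))
    x2 = min(src_w, int(obj["x"] + obj["width"]))
    y2 = min(src_h, int(obj["y"] + obj["height"]))
    return x1, y1, x2, y2

def motion_vector(center_prev, center_cur):
    """Pixel displacement of a tracked box center between two frames."""
    return center_cur[0] - center_prev[0], center_cur[1] - center_prev[1]
```

With frame timestamps, dividing the motion vector by the frame interval gives an approximate pixel-space speed, which a camera calibration can map to real-world units.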
Deployment Options
EyePop Cloud
- Scalable
- Managed infrastructure
- Ideal for web apps and SaaS
On-Premise Runtime
- Data never leaves your network
- Ideal for regulated industries
- Compatible with GPU or CPU servers
- Edge inference capable
Who This Is For
- Developers building traffic analytics or ITS products
- Teams supporting city planning and mobility insights
- Startups prototyping traffic-cam intelligence features
- Integrators deploying vision into existing camera networks
If your system depends on knowing where vehicles are, this model is the fastest way to get there.
Get early access
Want to move faster with visual automation? Request early access to Abilities and get notified as new vision capabilities roll out.