Anti-Bot
What problem does it target?
Bots such as search engine indexers are considered good, but unauthorized bots are a serious problem in many organizations for the following reasons:
- Scraping - bots can scrape web pages and APIs, costing companies their competitive advantage: scraping prices in eCommerce, or scraping user posts to republish elsewhere. Bots that ignore the robots.txt file usually fall under this category; a short sketch of the robots.txt check that compliant bots perform appears after this list.
- Malicious bulk operations - bots can abuse SMS-sending systems, buy all available festival tickets to later resell, or make purchases with a large number of stolen credit cards.
- Security scanning - bots scan for security issues such as vulnerabilities, misconfigurations, and weak passwords, increasing the likelihood of an attack.
- Fake content creation - bots create and operate fake profiles, then use them to post unauthorized ads and political content. This is especially common on social media.
- Waste - Even when bots don’t have malicious intent, they can still consume significant server-side bandwidth, compute, and storage resources, which increases operational costs.
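As an illustration of the scraping category above, here is a minimal sketch of the robots.txt check a well-behaved crawler performs before fetching a page; scraper bots simply skip it. The site URL, path, and user-agent string are hypothetical examples.

```python
# Minimal sketch: how a compliant crawler consults robots.txt before fetching.
# Scraper bots in the category above typically skip this check entirely.
# The site, path, and user agent below are hypothetical examples.
from urllib import robotparser

TARGET_SITE = "https://shop.example.com"   # hypothetical site
USER_AGENT = "ExampleIndexer/1.0"          # hypothetical crawler identity

parser = robotparser.RobotFileParser()
parser.set_url(f"{TARGET_SITE}/robots.txt")
parser.read()  # download and parse the site's robots.txt

url = f"{TARGET_SITE}/products/12345"      # hypothetical page
if parser.can_fetch(USER_AGENT, url):
    print(f"{url} is allowed for {USER_AGENT}; a compliant bot may fetch it")
else:
    print(f"{url} is disallowed for {USER_AGENT}; a compliant bot stops here")
```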
Because of the damage caused by bots, many companies take anti-bot measures.
What does this solution do?
- Classify bot vs. human traffic (a simplified classify-and-act sketch follows this list)
- Block, challenge, or throttle unauthorized bots
- Allow good bots (e.g., search engines)
- Provide analytics and reporting on bot activity
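The sketch below shows, in simplified form, the classify-then-act flow these solutions implement. It assumes a score-based model; the signals, thresholds, allowlisted crawler names, and actions are illustrative assumptions, not any vendor’s actual API or detection logic.

```python
# Minimal sketch of a score-based classify -> act flow, as described above.
# All signals, thresholds, and the allowlist are illustrative assumptions,
# not any specific vendor's implementation.
from dataclasses import dataclass

# Allowlisted good bots (example values; real products verify crawler
# identity via reverse DNS or signatures, not just the user agent).
GOOD_BOT_AGENTS = {"Googlebot", "Bingbot"}

@dataclass
class Request:
    user_agent: str
    requests_last_minute: int
    solved_js_challenge: bool

def bot_score(req: Request) -> int:
    """Crude bot-likelihood score: higher means more bot-like."""
    score = 0
    if not req.solved_js_challenge:
        score += 50   # failed or skipped the JavaScript challenge
    if req.requests_last_minute > 100:
        score += 40   # unusually high request rate
    if "python-requests" in req.user_agent.lower():
        score += 30   # scripting-library user agent
    return score

def decide(req: Request) -> str:
    """Map a request to allow / throttle / challenge / block."""
    if any(agent in req.user_agent for agent in GOOD_BOT_AGENTS):
        return "allow"        # good bots (e.g., search engines) pass through
    score = bot_score(req)
    if score >= 80:
        return "block"
    if score >= 50:
        return "challenge"    # e.g., present a CAPTCHA or JS challenge
    if score >= 40:
        return "throttle"
    return "allow"

# Example: a high-rate client with a scripting user agent gets blocked.
print(decide(Request("python-requests/2.31", 250, False)))  # -> block
```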
Who is this for?
Many enterprises enable anti-bot measures not proactively but as a reaction to an incident. Optimally, each application should go through a risk assessment to determine whether it’s likely to be the target of bots and to quantify the potential impact of such an incident. However, this process is time-consuming, requires expertise, and competes with other priorities, so many enterprises choose to accept this risk, whether knowingly or unknowingly.
The following application types are more likely to be targeted by bots:
- Internet-facing applications - bot attacks are limited to internet-facing applications; internal applications won’t normally need anti-bot measures
- Applications where user account takeover can result in sensitive data exposure
- Applications where user account takeover can result in financial loss, such as financial services, advertising platforms, and eCommerce platforms
- Applications with valuable proprietary content or intellectual property that is needed to maintain a competitive advantage
- Applications with SMS functionality that can be abused for profit
- Applications hosting user-generated content viewed by the public, such as social media platforms and website builders
Who might not benefit from this?
- Internal-only applications
- Applications with static, generic (not proprietary) content
- Applications with non-sensitive user content that doesn’t include any financial/payment elements
Pitfalls and remedies
| Pitfall | Remedy |
|---|---|
| Sluggish user experience - Anti-bot solutions can add latency that makes applications sluggish. The latency might come and go randomly, or appear only during peak times. | If application latency is a key metric: (1) choose vendors that can handle your required scale, (2) test the solution with all required functionality enabled (modules, rules), since additional functionality may increase latency, and (3) monitor application latency continuously and independently (a minimal probe sketch follows this table) |
| Blocking/challenging legitimate users | During the initial learning period and then continuously, check how many sessions were blocked and for what reasons. Identify rules that are more prone to false positives and tune them to the right sensitivity. |
| Downtime - Some anti-bot solutions are deployed in-line, adding a single point of failure. If the anti-bot solution is unresponsive, the entire system might be unresponsive. | Check if the anti-bot system can be temporarily disabled in case it breaks, and how long the change takes to propagate. |
| Evasion by advanced bots | Create monitor-only rules below the blocking thresholds and sample the flagged traffic to verify that the detected activity is legitimate. |
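As a sketch of the independent latency monitoring suggested for the first pitfall, the probe below measures response time from outside the anti-bot vendor’s infrastructure. The endpoint URL, probe interval, and alert threshold are hypothetical examples.

```python
# Minimal sketch of independent latency monitoring (remedy for the first
# pitfall above): probe the application from outside the anti-bot vendor's
# infrastructure and record how long requests take. The URL, interval, and
# threshold are hypothetical examples.
import time
import urllib.request

PROBE_URL = "https://app.example.com/health"  # hypothetical endpoint
INTERVAL_SECONDS = 60
ALERT_THRESHOLD_MS = 500

def measure_latency_ms(url: str) -> float:
    """Time a single GET request and return the latency in milliseconds."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=10) as response:
        response.read()
    return (time.monotonic() - start) * 1000

if __name__ == "__main__":
    while True:
        try:
            latency = measure_latency_ms(PROBE_URL)
            status = "ALERT" if latency > ALERT_THRESHOLD_MS else "ok"
            print(f"{status}: {PROBE_URL} responded in {latency:.0f} ms")
        except Exception as exc:  # failed probes also signal possible downtime
            print(f"ALERT: probe failed: {exc}")
        time.sleep(INTERVAL_SECONDS)
```

In practice, such probes would run from several network locations and feed an existing monitoring system rather than print to stdout.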
Sample products
- Cloudflare Bot Management & Protection
- Human Security (PerimeterX)