# Firecrawl

Browse the web and get markdown output.
## Authentication

This connector uses token-based authentication.

> **Info:** Set up your connection in the Abstra Console before using it in your workflows.
## How to use

### Using the Smart Chat

Ask the Smart Chat to run an action, for example:

> Execute the action "CHOOSE_ONE_ACTION_BELOW" from my connector "YOUR_CONNECTOR_NAME" using the params "PARAMS_HERE".

### Using the Web Editor
```python
from abstra.connectors import run_connection_action

result = run_connection_action(
    connection_name="your_connection_name",
    action_name="your_action_name",
    params={
        "param1": "value1",
        "param2": "value2",
    },
)
```
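As a concrete sketch, a call to the `post_scrape` action might look like the following. The connection name `firecrawl` and the contents of `data` are assumptions, not values from this document; check your Abstra Console for your actual connection name and the Firecrawl API for the full set of scrape options.

```python
# Hypothetical payload for the post_scrape action.
# "https://example.com" is a placeholder URL.
params = {
    "data": {
        "url": "https://example.com",  # the single URL to scrape
    }
}

# Inside an Abstra workflow you would then run (requires a configured
# connection, so it is shown commented out here):
# from abstra.connectors import run_connection_action
# result = run_connection_action(
#     connection_name="firecrawl",  # assumed connection name
#     action_name="post_scrape",
#     params=params,
# )
```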
## Available Actions

This connector provides 20 actions:
| Action | Purpose | Parameters |
|---|---|---|
| post_scrape | Scrape a single URL and optionally extract information using an LLM | data (object, required) |
| post_batch_scrape | Scrape multiple URLs and optionally extract information using an LLM | data (object, required) |
| get_batch_scrape_by_id | Get the status of a batch scrape job | id (string) |
| delete_batch_scrape_by_id | Cancel a batch scrape job | id (string) |
| get_batch_scrape_by_id_errors | Get the errors of a batch scrape job | id (string) |
| get_crawl_by_id | Get the status of a crawl job | id (string) |
| delete_crawl_by_id | Cancel a crawl job | id (string) |
| get_crawl_by_id_errors | Get the errors of a crawl job | id (string) |
| post_crawl | Crawl multiple URLs based on options | data (object, required): url (string), excludePaths (array), includePaths (array), maxDepth (integer), maxDiscoveryDepth (integer), ignoreSitemap (boolean), ignoreQueryParameters (boolean), limit (integer), allowBackwardLinks (boolean), allowExternalLinks (boolean), delay (number), webhook (object), scrapeOptions (object) |
| post_map | Map multiple URLs based on options | data (object, required): url (string), search (string), ignoreSitemap (boolean), sitemapOnly (boolean), includeSubdomains (boolean), limit (integer), timeout (integer) |
| post_extract | Extract structured data from pages using LLMs | data (object, required): urls (array), prompt (string), schema (object), enableWebSearch (boolean), ignoreSitemap (boolean), includeSubdomains (boolean), showSources (boolean), scrapeOptions (object), ignoreInvalidURLs (boolean) |
| get_extract_by_id | Get the status of an extract job | id (string) |
| get_crawl_active | Get all active crawls for the authenticated team | No parameters |
| post_deep_research | Start a deep research operation on a query | data (object, required): query (string), maxDepth (integer), timeLimit (integer), maxUrls (integer), analysisPrompt (string), systemPrompt (string), formats (array), jsonOptions (object) |
| get_deep_research_by_id | Get the status and results of a deep research operation | id (string) |
| get_team_credit_usage | Get remaining credits for the authenticated team | No parameters |
| get_team_token_usage | Get remaining tokens for the authenticated team (Extract only) | No parameters |
| post_search | Search and optionally scrape search results | data (object, required): query (string), limit (integer), tbs (string), location (string), timeout (integer), ignoreInvalidURLs (boolean), scrapeOptions (object) |
| post_llmstxt | Generate LLMs.txt for a website | data (object, required): url (string), maxUrls (integer), showFullText (boolean) |
| get_llmstxt_by_id | Get the status and results of an LLMs.txt generation job | id (string) |
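A common pattern with these actions is to start a job and then poll it by id, e.g. `post_crawl` followed by `get_crawl_by_id`. The sketch below builds a `post_crawl` payload using only parameters from the table above; the connection name, target URL, and the shape of the returned job id are assumptions, and the SDK calls are commented out since they require a configured Abstra connection.

```python
# Build the payload for post_crawl from fields documented in the table above.
# "https://example.com" is a placeholder URL.
crawl_params = {
    "data": {
        "url": "https://example.com",
        "maxDepth": 2,            # how many link levels to follow
        "limit": 50,              # cap the number of pages crawled
        "ignoreSitemap": False,   # still discover pages via the sitemap
    }
}

# Start the crawl, then poll its status by id (hypothetical flow;
# the job id field name is an assumption about the response shape):
# from abstra.connectors import run_connection_action
# job = run_connection_action(
#     connection_name="firecrawl",
#     action_name="post_crawl",
#     params=crawl_params,
# )
# status = run_connection_action(
#     connection_name="firecrawl",
#     action_name="get_crawl_by_id",
#     params={"id": job["id"]},
# )

# Sanity check: every field we send is one post_crawl documents.
documented = {
    "url", "excludePaths", "includePaths", "maxDepth", "maxDiscoveryDepth",
    "ignoreSitemap", "ignoreQueryParameters", "limit", "allowBackwardLinks",
    "allowExternalLinks", "delay", "webhook", "scrapeOptions",
}
assert set(crawl_params["data"]) <= documented
```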