

In this quickstart, you’ll build a scraper by hand—writing the code, deploying it, configuring a Job, and running it end to end. It’s designed to walk you through the core platform concepts: Projects, APIs, Runs, Jobs, and deployment. If you’d rather skip the manual steps and have AI build your scraper, see the Intuned Agent quickstart.

Prerequisites

  • An active Intuned account (sign up here). No credit card is required, and Intuned has a free plan
  • Basic familiarity with TypeScript or Python

Create and deploy your first scraper

You can develop Intuned Projects in two ways:
  • Hosted projects (online IDE) — Zero setup. Write, test, and deploy directly from your browser.
  • Connected projects (local CLI) — Develop locally with full version control and CI/CD integration.
Choose your preferred approach below.

Log in and create project

  1. Go to app.intuned.io/projects and log in.
  2. Select Create Project.
  3. Select your language (TypeScript or Python).
  4. Choose the e-commerce-scrapingcourse template.
  5. Name it ecommerce-scraper-quickstart.
  6. Ensure Hosted project is selected as Type.
  7. Select Create and Open.
Expected result: The Intuned IDE opens with your project loaded.
What you just got: An Intuned Project groups related browser automations together. Each file in the api/ folder becomes a callable function that controls a browser using Playwright, accepts parameters, and returns structured results. When you deploy this project, all its APIs go live together as a single deployable unit.
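To make that concrete, here is a minimal sketch of what a Python file in api/ can look like. The handler signature, demo URL, and selector below are assumptions for illustration; the template’s actual code may differ.

```python
# Hypothetical shape of an api/ file in a Python project. The Intuned runtime
# passes in a Playwright Page; whatever the handler returns becomes the
# structured result of the run. Signature and selector are assumed.

async def handler(page, params=None):
    # Navigate to the listing page of the demo store (URL assumed from the
    # e-commerce-scrapingcourse template name).
    await page.goto("https://www.scrapingcourse.com/ecommerce/")
    # Extract the product names on the page (selector is illustrative).
    names = await page.eval_on_selector_all(
        ".product-name", "els => els.map(e => e.textContent.trim())"
    )
    return {"products": names}
```

Because the handler is just an async function, you can exercise its logic locally by passing in any object that exposes the same Page methods.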

Explore the project code

In the file explorer, you’ll see two API files:
  • api/list - Navigates the e-commerce site, extracts product info from all pages, and triggers details for each product found.
  • api/details - Visits each product page and extracts detailed information (price, SKU, descriptions, variants).
These two APIs work together—list discovers products and triggers details for each one using extendPayload. This pattern works well for job runs where the scope of work is determined at runtime, allowing your automation to adapt to whatever data it discovers.
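A sketch of that fan-out in Python: building one extended payload per discovered product is plain logic, while the helper that queues them (shown as extend_payload in the comment) is an assumption; check the SDK documentation for the real name and signature.

```python
# Sketch of the list -> details fan-out pattern. Each extended payload names
# the API to run ("details") and the parameters that run should receive.

def build_details_payloads(product_urls):
    """One extended payload per discovered product, targeting the details API."""
    return [
        {"api": "details", "parameters": {"url": url}}
        for url in product_urls
    ]

# Inside the list handler you would then queue each payload, roughly:
#   for payload in build_details_payloads(urls):
#       await extend_payload(payload)  # assumed helper; takes effect in a Job context
```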

Run your scraper in Hosted Projects

Test the scraper’s list API to see it working in real time.
  1. In the top toolbar, select list from the API dropdown.
  2. Select Params #1 next to it—you’ll see empty params {}.
  3. Select the Run button.
Expected result: The browser panel on the right shows the list scraper executing live. You’ll see it navigate through all product pages, extract data, and paginate automatically. The terminal below shows what executed and the result of the run.
Extended payloads: The IDE also displays a link to view the extended payloads created from this run. For each product found, you’ll see a payload containing the API name details and the product parameters. These payloads represent additional runs that execute when running in a Job context.

Deploy your project

Deploy your scraper to Intuned’s infrastructure.
  1. Select the Deploy button in the top-right corner of the IDE.
  2. In the deployment dialog, select Deploy to start.
  3. Watch the live deployment logs until you see “Ready”.
Expected result: A success message appears. Your scraper is now live and ready to run.

Test in the Playground

Now test your deployed scraper through the API Playground.
  1. In the deployment success dialog, select Run in Playground.
  2. In the Playground, you’ll see your deployed API list ready to call.
  3. The request body is pre-filled with test parameters. Select Start Run to execute.
The Playground is an interactive way to test your deployed automation APIs. Your scraper is now callable from anywhere via API: use it from your application, service, or any HTTP client. See the API Reference for authentication and programmatic usage.
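As a sketch of that programmatic usage in Python: the run endpoint URL and the auth header name below are placeholders, not the real API surface; take both from the API Reference.

```python
# Sketch of starting a run over HTTP from your own code. RUN_URL and the auth
# header name are placeholders/assumptions -- get the real values from the
# API Reference.
import json
import urllib.request

RUN_URL = "https://<see-api-reference-for-your-run-endpoint>"  # placeholder

def build_run_request(parameters, api_key):
    """Build a POST request that starts a run with the given parameters."""
    body = json.dumps({"parameters": parameters}).encode("utf-8")
    return urllib.request.Request(
        RUN_URL,
        data=body,
        headers={
            "Content-Type": "application/json",
            "x-api-key": api_key,  # assumed header name; check the API Reference
        },
        method="POST",
    )

# To actually send it once RUN_URL and the header are filled in:
#   with urllib.request.urlopen(build_run_request({}, api_key="...")) as resp:
#       print(json.load(resp))
```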
Expected result: The scraper executes on Intuned’s infrastructure. You’ll see the run details and results in real time. Your scraper is now deployed and fully operational.

What’s next?

  • Jobs — Jobs are the common way to run scrapers. Configure a schedule (daily, hourly, or custom) and define a sink to send your scraper results to a webhook, S3 bucket, or other destination.
  • Authentication — For scrapers that require login, Intuned provides built-in authentication support. You define how to log in and how to verify a session, and Intuned handles the rest—validating sessions before runs, reusing them when possible, and recreating them when expired.
  • Monitoring and traces — Every run generates detailed logs, browser traces, and session recordings. Use these tools to debug failures, verify your scraper is working correctly, and understand what happened during execution.
  • Flexible automations — Build scrapers your way. Write deterministic code, use AI-driven extraction, or combine both in a hybrid approach. Use any library or package—Intuned is unopinionated by design.
  • Intuned Agent quickstart — You can write your scraper logic manually like in this quickstart, or use Intuned Agent to build it from a prompt. Intuned Agent can also help you update existing projects, fix failed runs, and iterate on your code faster.
  • Cookbook — Browse full working examples of scrapers and other automations. Each example includes complete code you can use as a starting point for your own projects.
  • Online IDE — Learn more about the Intuned IDE used in this quickstart.
  • Local development (CLI) — Learn more about developing Connected projects locally with the Intuned CLI.