Locust Directory

Login and Crawl All Pages

Log in to a site and crawl all the pages linked from the index page.

Overview

This test starts with a login followed by a request to the site's index page. It then collects the href of every anchor tag on that page and randomly browses through the collected URLs.

Code

import random
from locust import HttpUser, task
from pyquery import PyQuery

class AwesomeUser(HttpUser):
    def login(self):
        # Authenticate before crawling; replace the credentials with real ones.
        self.client.post("/login", {
            "username": "EXAMPLE_USER",
            "password": "PASSWORD"
        })

    def index_page(self):
        # Fetch the index page and collect the href of every anchor tag.
        r = self.client.get("/")
        pq = PyQuery(r.content)

        link_elements = pq("a")
        self.toc_urls = []

        for link in link_elements:
            if "href" in link.attrib:
                self.toc_urls.append(link.attrib["href"])

    def on_start(self):
        # Runs once per simulated user, before any tasks are scheduled.
        self.login()
        self.index_page()

    @task
    def load_page(self):
        # Pick a random URL collected from the index page and request it.
        url = random.choice(self.toc_urls)
        self.client.get(url)
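
Two optional refinements are often useful in practice: a wait_time so simulated users pause between page loads, and a filter so only internal links are crawled. The sketch below is not part of the original example; the between(1, 5) timing and the assumption that internal links start with "/" are illustrative choices you should adjust for your site.

import random
from locust import HttpUser, task, between
from pyquery import PyQuery

class AwesomeUser(HttpUser):
    # Assumption: pause 1-5 seconds between page loads; tune to taste.
    wait_time = between(1, 5)

    def on_start(self):
        # Log in, then build the list of URLs to crawl from the index page.
        self.client.post("/login", {
            "username": "EXAMPLE_USER",
            "password": "PASSWORD"
        })
        r = self.client.get("/")
        pq = PyQuery(r.content)

        # Assumption: internal links start with "/"; adjust the check
        # to match your site's URL structure.
        self.toc_urls = [
            link.attrib["href"]
            for link in pq("a")
            if link.attrib.get("href", "").startswith("/")
        ]

    @task
    def load_page(self):
        # Only request a page if the index yielded any internal links.
        if self.toc_urls:
            self.client.get(random.choice(self.toc_urls))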


This guide is part of the LoadForge Directory, an index of locustfiles for use with LoadForge website and API load tests. We also provide a wizard to generate tests and onboarding assistance for clients. Contact us should you have any questions.

LoadForge provides load testing and stress tests for websites, APIs and servers. It uses your cloud account to rapidly scale large numbers of simulated users to load test your website, store, API, or application at low cost - just cents per test!

For more help on Tests, please see our official documentation. Logged-in users can also use our wizard to generate a locustfile, or you can record your browser steps.

Ready to run your test?
Start your first test within minutes.