Author: ToolMinT

  • 3 Cursor Scripts That Save Me 2 Hours a Week — No Coding Background Needed

    The three specific automations that justified my Cursor subscription — with the exact prompts I used and the mistakes that wasted my first attempts.

    Content mode: Tested

    Two months ago I wrote about setting up Cursor as a non-developer freelancer. Since then, three scripts have survived into weekly use. Together they save me roughly two hours per week on tasks I used to do by hand: cleaning messy client CSV exports, renaming deliverable files into organized folders, and pulling public pricing data into comparison spreadsheets.

    None of these required coding knowledge to build. Each one took under 30 minutes with Cursor’s AI assist. But each one also failed on the first attempt because I made the same prompting mistake — and fixing that mistake is the most useful thing I can share.

    3 Scripts — Time Savings Summary

    CSV Cleaner: 15 min → 10 sec per file, ~8 files/week

    File Renamer: 20 min → 3 sec weekly sort, 90% auto-sorted

    Pricing Scraper: 35 min → 20 min per comparison, ~2/month

    Combined weekly savings: ~2 hours


    Script 1: The CSV cleaner that handles every client’s export format

    The problem: I get CSV exports from four different clients — HubSpot, Salesforce, Google Sheets, and one client who apparently uses a custom CRM from 2014. Every export has different column names, date formats, and encoding quirks. Manually normalizing these for analysis used to take 15–20 minutes per file, and I process about eight files per week.

    The Cursor prompt that worked:

    Write a Python script that reads a CSV file, detects the delimiter
    automatically, standardizes column names to lowercase with underscores,
    converts all date columns to YYYY-MM-DD format, removes completely empty
    rows, and saves the output as a clean UTF-8 CSV with the suffix “-clean”.

    The script handles my four client formats without modification. I drop a CSV into the folder, run the script, and get a cleaned version in under two seconds. Time per file went from 15 minutes to about 10 seconds.

    The mistake I made first: my initial prompt was “clean up my CSV files.” Cursor generated a script that assumed a specific column structure. When I fed it a different client’s export, it crashed. The fix was being explicit about what cleaning means — automatic delimiter detection, lowercase columns, date normalization — instead of assuming Cursor would infer my needs.
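    For reference, here is a minimal sketch of the kind of script that prompt produces. This is not the author's actual generated code; the clean_csv name and the pandas-based approach are assumptions, but the steps match the prompt: sniff the delimiter, normalize columns and dates, drop empty rows, save with a “-clean” suffix.

    ```python
    # Illustrative sketch only, not the author's generated script.
    # Assumes pandas is installed; clean_csv is a hypothetical name.
    import csv
    import sys
    from pathlib import Path

    import pandas as pd


    def clean_csv(path: str) -> Path:
        src = Path(path)

        # Detect the delimiter automatically by sniffing the start of the file.
        with open(src, newline="", encoding="utf-8", errors="replace") as f:
            delimiter = csv.Sniffer().sniff(f.read(4096)).delimiter

        df = pd.read_csv(src, sep=delimiter)

        # Standardize column names to lowercase with underscores.
        df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]

        # Convert any column with "date" in its name to YYYY-MM-DD.
        for col in df.columns:
            if "date" in col:
                df[col] = pd.to_datetime(df[col], errors="coerce").dt.strftime("%Y-%m-%d")

        # Remove completely empty rows and save as UTF-8 with a "-clean" suffix.
        df = df.dropna(how="all")
        out = src.with_name(src.stem + "-clean.csv")
        df.to_csv(out, index=False, encoding="utf-8")
        return out


    if __name__ == "__main__" and len(sys.argv) > 1:
        print(clean_csv(sys.argv[1]))
    ```

    Running something like python clean_csv.py export.csv would drop an export-clean.csv next to the original.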

    Script 2: The file renamer that sorts deliverables by client and date

    The problem: by Friday each week, my Downloads folder has 30–40 files — client feedback PDFs, reference images, revised drafts, invoices. I used to spend 20 minutes every Friday dragging files into Client Name/YYYY-MM/ folders. Miss a week and the backlog doubles.

    The prompt that worked:

    Write a Python script that scans a folder for files, identifies the
    client name from the filename or parent folder name using a lookup dict
    I’ll provide, moves each file to a target directory structure of
    ClientName/YYYY-MM/ based on the file’s modification date, and logs
    every move to a JSON file. Skip files that don’t match any client pattern.

    I added a simple dictionary mapping filename patterns to client names (e.g., files starting with “acme” go to the “Acme Corp” folder). The script runs in about three seconds and correctly sorts 90% of files. The 10% it skips — files with ambiguous names — I sort manually, which takes two minutes instead of twenty.

    The mistake I made first: I asked Cursor to “organize my files intelligently.” The AI tried to use NLP to detect client names from file content, which was absurdly over-engineered for my needs. A simple pattern-matching dictionary was all I needed. Lesson: tell Cursor the simplest approach that would work, not the smartest one.

    “Tell Cursor the simplest approach that would work, not the smartest one.”
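    To make the pattern-matching approach concrete, here is a hedged sketch of such a renamer. The CLIENT_PATTERNS dict and the sort_downloads name are illustrative stand-ins, not the author's actual mapping or code.

    ```python
    # Illustrative sketch of the pattern-dict renamer, not the author's script.
    import json
    import shutil
    from datetime import datetime
    from pathlib import Path

    # Filename prefix -> client folder name (a real dict would be longer).
    CLIENT_PATTERNS = {"acme": "Acme Corp", "globex": "Globex"}


    def sort_downloads(src_dir: str, dest_dir: str, log_path: str = "moves.json"):
        moves = []
        for f in Path(src_dir).iterdir():
            if not f.is_file():
                continue
            # Match the filename against the lookup dict; skip unknown files.
            client = next(
                (name for prefix, name in CLIENT_PATTERNS.items()
                 if f.name.lower().startswith(prefix)),
                None,
            )
            if client is None:
                continue
            # Build ClientName/YYYY-MM/ from the file's modification date.
            month = datetime.fromtimestamp(f.stat().st_mtime).strftime("%Y-%m")
            target = Path(dest_dir) / client / month
            target.mkdir(parents=True, exist_ok=True)
            shutil.move(str(f), str(target / f.name))
            moves.append({"file": f.name, "to": str(target)})
        # Log every move so a mis-sorted file can be traced back.
        Path(log_path).write_text(json.dumps(moves, indent=2))
        return moves
    ```

    Ambiguous files simply stay put, which matches the article's 90/10 split: the dict handles the bulk, and the leftovers get sorted by hand.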

    Script 3: The pricing scraper that feeds my comparison spreadsheets

    The problem: for competitive analysis deliverables, I need current pricing from 5–10 SaaS tools. Manually visiting each pricing page, noting the tiers, and entering them into a spreadsheet takes 30–40 minutes per comparison. I do roughly two of these per month.

    The prompt that worked:

    Write a Python script that takes a list of URLs from a JSON file,
    fetches each page’s HTML, extracts text content from pricing-related
    sections (look for elements containing “pricing”, “price”, “plan”,
    “month”, “$”), and saves the extracted text to a timestamped JSON file
    with the URL as the key. Use requests with a 10-second timeout and skip
    URLs that return errors.

    This one is the roughest of the three — it doesn’t parse pricing into structured data, it just extracts the relevant text sections. But that’s enough. I review the extracted text, pull the numbers into my spreadsheet, and verify against the actual page. The extraction step saves about 15 minutes per comparison because I’m reading pre-filtered text instead of navigating through marketing pages.

    The mistake I made first: I asked for a script that would “extract and compare pricing automatically.” Cursor generated something that tried to parse dollar amounts, tier names, and feature lists into a structured table. It worked on two out of ten sites and hallucinated data on the rest. The simpler approach — extract raw text and let me do the interpretation — is less impressive but actually reliable.
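    A rough sketch of that extract-don't-parse approach follows. The keyword list mirrors the prompt; the function names, the crude tag-stripping regex, and the requests usage are illustrative assumptions, not the author's exact script.

    ```python
    # Illustrative sketch only: strip tags, keep lines that mention pricing.
    import json
    import re
    from datetime import datetime

    KEYWORDS = ("pricing", "price", "plan", "month", "$")


    def extract_pricing_text(html: str) -> list[str]:
        # Crude tag strip (a real script might use BeautifulSoup), then keep
        # only non-empty lines containing one of the pricing keywords.
        text = re.sub(r"<[^>]+>", "\n", html)
        lines = (line.strip() for line in text.splitlines())
        return [ln for ln in lines if ln and any(k in ln.lower() for k in KEYWORDS)]


    def scrape(urls_file: str, out_prefix: str = "pricing") -> str:
        import requests  # third-party: pip install requests

        with open(urls_file, encoding="utf-8") as f:
            urls = json.load(f)
        results = {}
        for url in urls:
            try:
                resp = requests.get(url, timeout=10)  # 10-second timeout
                resp.raise_for_status()
            except requests.RequestException:
                continue  # skip URLs that return errors
            results[url] = extract_pricing_text(resp.text)
        # Timestamped output file keyed by URL.
        out = f"{out_prefix}-{datetime.now():%Y%m%d-%H%M%S}.json"
        with open(out, "w", encoding="utf-8") as f:
            json.dump(results, f, indent=2)
        return out
    ```

    The output is deliberately raw text, not a structured table; the human interpretation step is what keeps it reliable.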

    The prompting pattern that fixed all three scripts

    The common mistake across all three first attempts: I described the outcome I wanted instead of the mechanism I needed. “Clean up my CSVs” vs. “detect delimiter, lowercase columns, normalize dates.” “Organize my files” vs. “match filename patterns to a lookup dictionary.” “Compare pricing” vs. “extract text from pricing-related HTML elements.”

    Cursor’s AI is good at writing code for well-specified tasks. It’s mediocre at inferring what you actually need from vague descriptions. As a non-developer, my instinct was to describe the problem and let the AI figure out the solution. That instinct was wrong.

    The pattern that works: describe the input (what the script receives), the transformation (the specific operations, in order), and the output (what the script produces and where it saves). Skip the “why” — Cursor doesn’t need context about your workflow, it needs technical specifications.


    What I’ve learned about Cursor’s limits for non-developers

    After two months of weekly use, here’s where Cursor excels and where it struggles for someone without a coding background:

    Works well: File manipulation, data formatting, text extraction, pattern matching. Anything where the input and output are clearly defined files. I’d estimate 80% of my automation needs fall into this category.

    Works poorly: Anything requiring ongoing interaction — scripts that need user input during execution, tools that should run in the background, or automations that depend on third-party APIs with authentication. I’ve tried building a simple email parser and a Notion integration, both of which required troubleshooting I couldn’t do without understanding the error messages.

    The maintenance question: Scripts break when the inputs change. My CSV cleaner needed one update when a client changed their export format. The file renamer needed a new pattern when I onboarded a new client. Each fix took about 10 minutes in Cursor. But I can see a future where maintaining ten scripts becomes its own time sink. For now, three scripts is manageable.

    For me, Cursor Pro justifies its $20/month with exactly these three scripts. Two hours saved per week at my rate covers the subscription several times over. But the value is concentrated — remove any one of the three and the math gets tighter.

    The uncertainty is whether I’ll find the fourth and fifth scripts that keep the ROI growing, or whether I’ve already picked the easy wins and the remaining automation opportunities require more technical skill than Cursor can bridge. I’m giving it one more quarter to find out.

    If you’re a non-developer freelancer considering Cursor, start with your most repetitive file-handling task. Write the prompt using the input-transformation-output pattern. Build one script, use it for two weeks, then decide if it’s worth continuing. Don’t try to automate everything at once — the first three wins will tell you whether Cursor fits your workflow.

    FAQ

    Do I need to know Python to use these scripts?

    No. I don’t write Python — Cursor generates it. I describe what I want, Cursor writes the code, and I run it. When something breaks, I paste the error message back into Cursor and ask it to fix the issue. That said, a basic understanding of file paths and command-line execution helps. I spent about an hour learning those basics in my first week.

    Can I build these same scripts with ChatGPT instead of Cursor?

    Yes, but with more friction. ChatGPT generates code in the chat window that you then copy into a file, save, and run manually. Cursor lets you generate, edit, and run the code in one environment. For a single script, the difference is minor. For iterating on a script that doesn’t work on the first try, Cursor’s integrated workflow saves significant time.

    How do I run these scripts on a schedule?

    It depends on your operating system. On Mac, Cursor helped me set up the scheduling too: the file organizer runs every Friday at 5pm as a cron job, and the CSV cleaner runs every time I save a file to a specific folder (that one needs a folder watcher rather than cron, since cron can’t react to file events). The pricing scraper I trigger manually because I only need it for specific projects.

    What happens when a script breaks?

    Not yet a major issue for me. In two months, I’ve had three breakages: one from a client changing their CSV format, one from a website redesigning their pricing page, and one from a Python update. Each took 5–15 minutes to fix by pasting the error into Cursor and asking for a correction. If breakages become more frequent as I add scripts, I’ll reassess.

    AI-assisted research and drafting. Reviewed and published by ToolMint. Last updated: 2026-04-25.

  • I Keep a Perplexity Space for Every Client. It Beats Bookmarks by a Mile.

    How one Perplexity feature turned scattered client research into a persistent knowledge base — and cut my proposal prep time by two-thirds.

    Content mode: Tested

    Three weeks ago a dormant client came back with a new project. I hadn’t worked with them since January. In the old days, that meant 90 minutes of re-research: scanning their industry, refreshing competitor intel, finding what changed in three months. Instead, I opened their Perplexity Space and found every research thread, curated source, and follow-up question exactly where I’d left them. I had a draft proposal outline in 25 minutes.

    I’ve been using Perplexity for about a year. Spaces — their persistent collection feature — changed how I organize client research entirely. Here’s the setup that works and the mistakes I made getting there.

    One Space per client creates a living research archive

    The concept is simple: Perplexity Spaces let you group threads, pin sources, and continue research across sessions without losing context. I maintain one Space per active client. Each Space contains industry queries, competitor analyses, past research threads, and pinned URLs I reference repeatedly.

    The shift in workflow is fundamental. Before Spaces, each research session started from zero. I’d re-run the same competitor queries, re-find the same industry reports, re-verify the same data points. Now every query builds on the last. When I research a client’s competitor in March, that thread is still there in June — with sources, follow-up questions, and my notes intact.

    I currently maintain seven Spaces: five for active retainer clients, one for prospects in my pipeline, and one personal Space for AI industry tracking (which feeds into ToolMint content). The retainer client Spaces get updated weekly. The prospect Space gets a burst of activity during pitch prep, then goes quiet.

    My 7 Spaces (April 2026)

    5 retainer clients — updated weekly

    1 prospect pipeline — burst activity during pitches

    1 personal (AI industry) — feeds ToolMint content

    Average pinned sources per Space: 5–10 URLs


    The structure that works: three layers per Space

    After experimenting with different organizational approaches, I settled on a three-layer structure inside each client Space:

    Layer 1 — Standing queries. These are the evergreen research threads I update monthly: “[Client industry] trends 2026,” “[Client’s top 3 competitors] recent news,” “[Client product category] pricing changes.” I re-run these threads on the first Monday of each month. Perplexity surfaces new sources while keeping the old context.

    Layer 2 — Project-specific threads. When a new deliverable comes in, I open a fresh thread inside the client’s Space. All research for that specific project lives here. When the project ships, the thread stays as an archive — useful when similar work comes back six months later.

    Layer 3 — Pinned sources. Every client Space has 5–10 pinned URLs: their pricing page, their main competitors’ homepages, industry benchmarks I reference repeatedly, and the most authoritative source I’ve found for their vertical. Pinning keeps these one click away instead of buried in browser bookmarks.

    “Every query builds on the last. When I research a client’s competitor in March, that thread is still there in June — with sources and follow-up questions intact.”

    Proposal prep went from 90 minutes to under 30

    The most measurable impact is on proposal preparation. My pre-Spaces workflow for a new pitch:

    1. Google the prospect’s industry for 20 minutes
    2. Find and read 3–5 competitor sites (15 minutes)
    3. Search for recent news and trends (15 minutes)
    4. Compile notes into a research brief (20 minutes)
    5. Start the actual proposal (20+ minutes)

    Total: 90+ minutes before writing a single proposal sentence.

    My Spaces workflow: open the prospect Space (or create one with three seed queries), review the curated context from prior research sessions, draft the proposal. If it’s a returning client, steps 1–4 are already done. If it’s a new prospect, the three seed queries take 10 minutes and Perplexity’s source citations mean I’m verifying as I go instead of in a separate pass.

    For returning clients, prep time dropped from 90 minutes to about 15. For new prospects, it’s about 30 minutes. I do four to six proposals per month, so the monthly time savings is somewhere between 4 and 7 hours.

    Where Spaces fall short: real-time monitoring

    Spaces are excellent for on-demand research but mediocre for ongoing monitoring. I tried using a Space as a “news feed” for one client’s industry, checking it daily for updates. The problem: Perplexity doesn’t proactively surface new information in a Space. You have to manually re-run queries or open new threads to get fresh results.

    For real-time monitoring, I still use Google Alerts (free, automatic, email delivery) supplemented by RSS feeds in Feedly. When an alert surfaces something worth researching deeper, I bring it into the relevant Perplexity Space for analysis. The Space is the thinking layer, not the monitoring layer.

    This is the main limitation I’d want Perplexity to address. If Spaces could notify me when new high-relevance sources appeared for my standing queries, it would replace Google Alerts entirely. As of April 2026, it doesn’t do this.

    The mistakes I made setting up Spaces

    Mistake 1: Too many Spaces. I initially created a Space for every prospect, including cold leads I’d never contacted. Within a month I had 15 Spaces, most with one or two threads. I consolidated down to seven and now only create a new Space when a prospect reaches the proposal stage.

    Mistake 2: Not pinning sources early. For my first three client Spaces, I relied on thread history alone. Finding a specific URL meant scrolling through weeks of conversations. Once I started pinning the five to ten most-referenced sources per client, navigation improved dramatically.

    Mistake 3: Treating Spaces like folders. Spaces aren’t file storage — they’re research sessions with memory. The value comes from continuing conversations, not from organizing static documents. When I started treating each standing query as a living thread instead of a reference file, the quality of Perplexity’s follow-up responses improved noticeably.

    The cost analysis for freelancers

    Perplexity Pro costs $20/month. My usage pattern: roughly 30–40 queries per week across seven Spaces, with heavier usage during pitch weeks.

    The direct ROI: 4–7 hours saved monthly on proposal research alone. The indirect ROI — having curated competitive intelligence ready for client strategy meetings, catching industry shifts that inform retainer work — is harder to quantify but consistently valuable.

    Could I replicate this with free tools? Partially. Google + bookmarks + a note-taking app covers the raw research. But the integration — sourced answers that build on prior context within a persistent collection — is what makes the workflow fast. Rebuilding that manually would cost more time than $20/month.


    For me, Perplexity Spaces turned client research from a repeated cost into a compounding asset. Every query I run today makes next month’s research faster. That’s the mental model that makes the subscription worth it — not any single query’s value, but the accumulated context that builds over months.

    The structural uncertainty: Perplexity’s business model depends on continued access to web sources, which some publishers are challenging legally. If source access narrows, the citation quality that makes Spaces trustworthy could degrade. I’m not concerned enough to change my workflow today, but it’s worth watching.

    If you’re a freelancer who does research for more than two clients, start with one Space for your highest-volume client. Run the three-layer structure for a month. If you find yourself opening that Space before opening Google, it’s working.

    FAQ

    Do Perplexity Spaces work on the free tier?

    No. Spaces require Perplexity Pro at $20/month. The free tier allows individual queries but no persistent collections or thread organization.

    How many Spaces can I create?

    No strict limit as of April 2026, but performance is best with under 20 active Spaces. I keep seven and archive or delete Spaces for completed projects quarterly.

    Can I share a Space with a client?

    Not yet. Spaces are currently single-user. If you need to share research with a client, export the key findings into a document. I typically copy the most relevant thread summaries into a Google Doc or Notion page for client visibility.

    Is Perplexity Spaces better than Notion for research?

    It depends on what you mean by “research.” For discovery — finding new information, verifying claims, building source-cited context — Perplexity is faster and more reliable. For organizing existing knowledge, managing project notes, and collaborating with clients, Notion is better. I use both: Perplexity for input, Notion for output.

    How do I handle conflicting sources within a Space?

    This comes up regularly. Perplexity surfaces sources that sometimes disagree — different pricing numbers, conflicting benchmarks, opposing analyses. I pin both conflicting sources and note the discrepancy in a follow-up query. The resolution usually comes from checking the publication dates and going with the most recent authoritative source.

    AI-assisted research and drafting. Reviewed and published by ToolMint. Last updated: 2026-04-25.

  • My $102/Month AI Stack: 4 Subscriptions That Pay for Themselves, 1 That Doesn’t

    A freelance consultant breaks down five AI subscriptions totaling $102/month — the ROI math on each one, and the one that might not survive the next quarterly audit.

    Content mode: Tested

    I opened my credit card statement last Sunday and counted: $102 per month on AI tools. Claude Pro at $20, ChatGPT Plus at $20, Notion AI add-on at $10 on top of $12 for Notion Plus, Perplexity Pro at $20, and Cursor Pro at $20. That’s $1,224 a year — roughly the cost of a decent laptop — flowing to five different companies for capabilities that partially overlap.

    So I ran a cold audit. Each subscription got the same test: hours saved per month times my effective hourly rate, minus the subscription cost. Four passed clearly. One is on notice.

    The $102/Month Stack (April 2026)

    Claude Pro: $20/month — long-form drafting

    ChatGPT Plus: $20/month — short-form speed tasks

    Notion AI: $10/month (+ $12 Notion Plus) — meeting processing

    Perplexity Pro: $20/month — research with citations

    Cursor Pro: $20/month — file automation scripts

    Total: $102/month | $1,224/year


    Claude Pro earns back its cost in two client documents

    Claude handles my long-form drafting — proposals, strategy documents, research briefs over 1,000 words. I’ve been paying for Claude Pro for six months and the pattern is consistent: what used to take 60 minutes of writing from scratch now takes 20 minutes of editing Claude’s first draft.

    I produce roughly 12–15 long-form documents per month. At 40 minutes saved per document, that’s 8–10 hours monthly. Even at a conservative blended rate, the ROI is somewhere around 25:1. Claude Pro isn’t just paying for itself — it’s the highest-returning subscription in the stack.

    The specific advantage over ChatGPT for this work: Claude maintains coherent voice and structure past the 2,000-word mark. ChatGPT tends to drift or repeat itself in longer documents. For short brainstorming under 300 words, ChatGPT is faster. But my client deliverables live in the long-form territory where Claude dominates.

    ChatGPT Plus pays for itself as a speed tool, barely

    ChatGPT’s value in my stack is narrow but real: fast iteration on short-form content. Email subject lines, meeting agenda drafts, quick reframes when I’m stuck on a sentence. These are 2–5 minute tasks where ChatGPT’s faster response time matters more than depth.

    I estimate 15–20 of these micro-tasks per week, saving maybe 3–5 minutes each. That’s roughly 4–6 hours per month — enough to justify $20, but with less margin than Claude.

    The honest concern: ChatGPT and Claude increasingly overlap. GPT-4o handles longer content better than GPT-4 did. Claude’s Haiku model handles quick tasks faster than it used to. If I had to cut one subscription tomorrow, ChatGPT Plus would be the candidate — not because it’s bad, but because Claude covers 70% of what I use ChatGPT for.

    “If I had to cut one subscription tomorrow, ChatGPT Plus would be the candidate — not because it’s bad, but because Claude covers 70% of what I use it for.”

    Notion AI is the subscription I forget I’m paying for — in a good way

    Notion AI costs $10/month on top of my $12/month Notion Plus plan. That $10 buys three features I use daily without thinking about them: meeting transcript summarization, action item extraction, and inbox triage across my client databases.

    The ROI is hard to quantify precisely because Notion AI’s value is embedded in workflows I’d be doing anyway. But here’s one concrete number: I process about eight client meetings per week. Each meeting recap used to take 12–15 minutes of manual note formatting. Notion AI cuts that to 3–4 minutes. That’s roughly 80 minutes saved per week on meeting processing alone.

    The compounding effect matters too. When action items are extracted automatically, I catch follow-ups I used to miss. Last month I traced two on-time deliverables directly to Notion AI surfacing tasks I’d have buried in my notes.

    At $10/month for this level of integration, Notion AI is the subscription I’d keep last.

    Perplexity Pro is the one tool that changed an entire workflow

    Before Perplexity, my client pitch research workflow was: open 20 browser tabs, read for 90 minutes, manually compile notes, then start drafting. Now it’s: open Perplexity, run three focused queries with source citations, review the top sources for 15 minutes, start drafting. The whole process went from 90 minutes to 30. I made that switch about a year ago, and the savings have been consistent since.

    I do pitch research for three to five new prospects per month, plus ongoing competitive monitoring for retainer clients. At roughly 60 minutes saved per research session and four to six sessions monthly, Perplexity Pro saves me 4–6 hours per month.

    The source citation feature is the specific differentiator. I need to verify claims before putting them in client deliverables. Perplexity shows me where each fact came from, which cuts verification time in half compared to synthesizing across ten browser tabs manually.

    Cursor Pro is the subscription on probation

    I’ve been using Cursor for two months. As a non-developer freelancer, my use case is narrow: file renaming scripts, CSV cleanup, and occasional web scraping for pricing data. Cursor handles these tasks well — I’ve built three scripts I run weekly that save roughly two hours combined.

    The math: 2 hours saved per month × my rate = roughly 3:1 ROI. That clears the bar, but barely. The issue is frequency. Some weeks I don’t touch Cursor at all. Other weeks I’m in it daily building a new automation. The value is lumpy in a way that makes the subscription feel expensive during quiet stretches.

    I’m keeping it for one more quarter. If my scripting usage stays at current levels, the annual renewal is justified. If it drops — if I exhaust the easy automation wins and plateau — I’ll downgrade to the free tier and use Claude for the occasional script prompt instead.

    The overlap tax: what it costs to pay for the same capability twice

    The uncomfortable truth about a five-tool stack: Claude and ChatGPT share maybe 40% of functionality. Perplexity and Claude’s web search share maybe 20%. I’m paying for the overlapping capabilities twice.

    I calculated the “overlap tax” by logging which tasks I could have done in a different tool at comparable quality. Over four weeks, the answer was roughly $12–15/month in duplicate capability — mostly the ChatGPT/Claude overlap on short-form tasks.

    That’s a real cost, but it’s also the price of having the best tool for each specific use case. Claude is better for long-form. ChatGPT is faster for short-form. Using one for both would save $20/month but cost me quality on half my tasks.


    My framework for when to add a new subscription versus expanding an existing one: if the new tool saves at least 3x its monthly cost in time, and the capability gap versus existing tools is measurable (not theoretical), add it. If the gap is “slightly better” or “sometimes faster,” it’s not worth the context-switching cost.
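    That 3x bar is simple enough to write down. A hypothetical helper, using made-up sample numbers rather than the author's actual billing rate:

    ```python
    # Illustrative sketch of the quarterly audit math; the sample numbers
    # below are hypothetical, not the author's real rate.
    def monthly_roi(hours_saved: float, hourly_rate: float, monthly_cost: float) -> float:
        """Ratio of monthly value recovered (hours * rate) to subscription cost."""
        return (hours_saved * hourly_rate) / monthly_cost

    # A tool saving 4 hours/month at a $50/hour rate against a $20 subscription
    # returns 10x: comfortably above the 3x bar. One hour/month at the same
    # rate returns 2.5x and fails the audit.
    ```

    Anything scoring under 3 goes on the "on notice" list for the next quarter.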

    For me, four out of five subscriptions clear that bar decisively. The fifth — ChatGPT Plus — is the one I’m watching most closely. Not because $20/month matters in isolation, but because the Claude/ChatGPT gap has been narrowing for six months. If Claude’s speed improves one more notch, or if ChatGPT’s long-form quality doesn’t catch up, I’ll consolidate.

    The structural question every solo freelancer should ask quarterly: am I paying for capability I actually use, or for capability I might need? My audit found $102/month of actual usage. Yours might find a different number. Run the math — you might be surprised which tool is the one on notice.

    FAQ

    Should I start with all five subscriptions at once?

    No. Start with one — whichever matches your highest-volume task. For most freelance writers and consultants, that’s Claude or ChatGPT. Add the next subscription only after you’ve confirmed the first one saves measurable time for a full month.

    Is $102/month too much for a solo freelancer?

    It depends on your revenue. At my billing rate, $102/month recovers in roughly the first 3–4 hours of time savings. If your effective rate is under $25/hour, I’d prioritize the top two (Claude + Perplexity) and skip the rest until revenue supports it.

    Can I get by with just free tiers?

    Not yet for professional work. Free tiers have usage caps that break workflows mid-task. I tried running Claude Free for a week — hit the message limit twice during active client work. The productivity loss from stopping mid-draft exceeded the subscription cost.

    Which subscription would you add sixth?

    It depends on my next workflow bottleneck. Right now it would be Otter.ai or Granola for meeting transcription — my current transcript quality is the weakest link in the meeting-to-action-item pipeline. But I won’t add it until I’ve confirmed the time savings math the same way I did for these five.

    AI-assisted research and drafting. Reviewed and published by ToolMint. Last updated: 2026-04-25.
