automation · job search · playwright · n8n · bullmq · postgres · openrouter

How I Used Automation to Focus on Better Job Opportunities

March 20, 2026
Nurhuda Joantama

Table of Contents

  • The first approach was intentionally simple
  • The pipeline changed as the problem became clearer
  • Eventually the scheduler-heavy setup turned into a BullMQ pipeline
  • Everything ran on my home-server setup
  • What I chose not to automate
  • The numbers were good, but the focus mattered more
  • The outcome I actually wanted
  • What this project taught me

I did not build this to automate my life. I built it because I was tired of wasting energy on repetitive job-search work.

After a long holiday in January, I started building a small automation system to help with my job search.

At the beginning, I was not trying to create some huge system. I just wanted to solve a very boring problem: finding jobs that matched my actual criteria took too long; I sometimes opened the same page more than once; and too much of my time went into filtering instead of deciding.

That was the real pain.

Job searching was not hard in one dramatic way. It was hard in the repetitive way. Too many listings, too much manual checking, too much duplicate browsing, and too much effort spent on opportunities that were never a good fit anyway.


1 · The first approach was intentionally simple

The first version was much simpler than the final one.

I used Playwright to collect job data and stored the base data in Postgres. That was it. No fancy orchestration at first, no polished end-to-end system, and definitely no big “AI agent” vision yet.
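One pain worth engineering away early was the duplicate browsing: if every scrape run inserts raw rows, the same listing shows up again and again. A common fix, sketched below with hypothetical names rather than my exact schema, is to derive a stable key per listing so re-scraped jobs map to the same Postgres row.

```typescript
// Sketch: deduplicate scraped listings before they reach Postgres by hashing
// the platform plus a normalized listing URL. The same job seen on a later
// scrape run then produces the same key. Names here are illustrative.
import { createHash } from "node:crypto";

function jobKey(platform: string, url: string): string {
  // Normalize: lowercase host and path, drop the query string and any
  // trailing slash, so tracking parameters don't create "new" jobs.
  const u = new URL(url);
  const path = u.pathname.replace(/\/+$/, "");
  const normalized = `${platform.toLowerCase()}|${u.hostname.toLowerCase()}${path.toLowerCase()}`;
  return createHash("sha256").update(normalized).digest("hex");
}
```

With a UNIQUE constraint on that key column, an `INSERT ... ON CONFLICT DO NOTHING` makes re-scrapes idempotent instead of noisy.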

I mainly wanted a reliable way to collect enough job data so I did not have to keep searching manually across the same platforms every day.

Once that started working, I created separate schedulers for different parts of the flow. It was still rough, but it gave me a structure to build on. Instead of treating everything as one giant pipeline from day one, I kept adding pieces only when the manual process started to feel annoying again.

That early version mattered a lot because it taught me something simple: I did not need perfect automation first. I needed useful automation first.


2 · The pipeline changed as the problem became clearer

Over time, the system stopped being “a scraper with a database” and became a set of pipelines.

It was never one fixed end-to-end flow; the first and final approaches looked quite different.

At one stage, I had multiple scheduled steps doing different jobs. The pipeline looked something like this:

  • scrape new job data
  • run a first scoring pass for basic filtering and relevance
  • run a second scoring pass for a deeper fit check against what I was looking for
  • prepare follow-up actions from the jobs that passed those filters

For the scoring layer, I used OpenRouter as the gateway, with OpenAI and Gemini models evaluating which opportunities were actually worth my attention.
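Because OpenRouter exposes an OpenAI-compatible chat-completions endpoint, a scoring call is just a POST with a model id and messages. Here is a hedged sketch of building that request for a fit-score prompt; the prompt wording and model id are illustrative, not my exact ones.

```typescript
// Build the request body for an OpenRouter chat-completions call that scores
// one job listing against my criteria. OpenRouter mirrors the OpenAI API
// shape: POST https://openrouter.ai/api/v1/chat/completions with a Bearer
// API key. The prompt and default model id below are examples only.
type Job = { title: string; company: string; description: string };

function buildScoringRequest(job: Job, criteria: string, model = "openai/gpt-4o-mini") {
  return {
    model,
    messages: [
      {
        role: "system",
        content: "You score job listings. Reply with a single integer 0-100 for fit.",
      },
      {
        role: "user",
        content: `Criteria: ${criteria}\nTitle: ${job.title}\nCompany: ${job.company}\nDescription: ${job.description}`,
      },
    ],
    temperature: 0, // deterministic-ish scoring across runs
  };
}
```

A `fetch` call would add the `Authorization: Bearer` header and POST this body; the score is then parsed out of `choices[0].message.content`, with a fallback when parsing fails.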

That shift was important. Once I moved from “collect everything” to “rank what matters,” the system became much more useful to me personally. The value was no longer in scraping a lot of jobs. The value was in reducing noise.


3 · Eventually the scheduler-heavy setup turned into a BullMQ pipeline

As the project grew, the scheduler-based setup became harder to maintain cleanly.

So the later version moved toward BullMQ for the pipeline orchestration. That gave me a better structure for handling separate stages and made the whole thing feel more like a real system instead of a collection of scheduled scripts.
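In BullMQ terms, each stage becomes a named queue with its own worker, and a job that passes one stage is enqueued for the next. The dependency-free sketch below shows that handoff pattern with in-memory arrays standing in for Redis-backed BullMQ queues; stage names are illustrative.

```typescript
// Dependency-free sketch of the queue-per-stage handoff that BullMQ
// formalizes. In the real setup each stage would be a BullMQ Queue with its
// own Worker, backed by Redis; here the queues are in-memory arrays so the
// flow is easy to follow. Stage names are illustrative.
type Stage = "scrape" | "score1" | "score2" | "actions";

const NEXT: Record<Stage, Stage | null> = {
  scrape: "score1",  // new listings go to the cheap relevance pass
  score1: "score2",  // survivors get the deeper fit check
  score2: "actions", // good fits get drafts / tracking entries prepared
  actions: null,     // terminal stage
};

type PipelineJob = { id: number; stage: Stage };

const queues: Record<Stage, PipelineJob[]> = {
  scrape: [],
  score1: [],
  score2: [],
  actions: [],
};

// A worker handles one job at its stage; if the job passes the stage's check,
// it is enqueued for the next stage, otherwise it drops out of the pipeline.
function advance(job: PipelineJob, passed: boolean): void {
  const next = NEXT[job.stage];
  if (passed && next !== null) {
    queues[next].push({ ...job, stage: next });
  }
}
```

The useful property of this shape is isolation: each stage can retry, rate-limit, or fail independently, which is exactly what the scheduler-heavy version made hard.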

I also expanded the workflow beyond just collecting and scoring data.

The later pipeline could also:

  • generate email drafts or cover letters when needed
  • send emails for roles that fit that path
  • log opportunities into Notion so I could track them more clearly
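The Notion step boils down to creating one page per opportunity in a tracking database via `POST https://api.notion.com/v1/pages`. A minimal sketch of that payload follows; the property names ("Name", "Company", "URL", "Score") are placeholders that would have to match the target database's actual schema.

```typescript
// Sketch: payload for creating a job-tracking entry in a Notion database via
// POST https://api.notion.com/v1/pages. The property names must match the
// target database's schema; the ones below are examples, not my real setup.
type Opportunity = { title: string; company: string; url: string; score: number };

function buildNotionPage(databaseId: string, opp: Opportunity) {
  return {
    parent: { database_id: databaseId },
    properties: {
      Name: { title: [{ text: { content: opp.title } }] },    // title property
      Company: { rich_text: [{ text: { content: opp.company } }] },
      URL: { url: opp.url },
      Score: { number: opp.score },                           // from the scoring pass
    },
  };
}
```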

That part mattered a lot because the goal was never only “scrape jobs faster.” The real goal was to spend less time on repetitive searching and more time reviewing better opportunities.


4 · Everything ran on my home-server setup

One detail I still like about this project is that the infrastructure was very personal.

I ran it on my own setup under Proxmox. My scraping workload lived on a VM there, and supporting services like n8n, Redis, and Postgres, with BullMQ running on top of Redis, lived in the same environment.

I liked this setup because it gave me enough control to keep experimenting without turning the project into another monthly cloud bill problem.

It also made the whole thing feel more real. This was not just a few scripts I ran once in a while. It became an actual system I could keep evolving at home.


5 · What I chose not to automate

One important boundary: I never fully automated the final application submission.

That was intentional.

I did not want a highly dynamic LLM agent operating my computer for the final step. The reasons were simple: cost and security.

I was comfortable using automation and AI for finding, filtering, ranking, drafting, and organizing. I was not comfortable letting that same layer take full control over the final application flow across dynamic sites.

So even though the system handled a lot of the repetitive work, I still kept the last decision and submission step under my own control.

That boundary made the project more practical for me. It stayed helpful without becoming something I did not fully trust.


6 · The numbers were good, but the focus mattered more

The metrics were strong enough to make the project feel worth it.

At one point, the system was helping me:

  • apply to around 40 jobs a day
  • scrape around 1,000 jobs daily
  • gather listings from platforms like LinkedIn, Indeed, JobStreet, and Dealls
  • reach around 4 to 10 calls each week before I joined my current company

Those numbers were exciting, but they were not the main thing I cared about.

The bigger win was that automation helped me focus on better opportunities.

Instead of spending my energy repeatedly searching, reopening the same listings, and manually filtering everything from scratch, I could spend more time reviewing jobs that already had a stronger chance of being relevant.

That changed the emotional side of job searching too. The process felt less chaotic and more intentional.


7 · The outcome I actually wanted

In March 2026, I joined my current company as a Software Engineer.

I would not say automation “got me the job” by itself. That would be too simplistic.

What it did do was remove a lot of repetitive work that normally drains time and attention during a job search. It helped me spend more of my effort on the parts that actually matter: reviewing the right roles, deciding where to apply, and following through with more focus.

That was the real outcome.

AI did not replace effort. Automation did not replace judgment. But together, they helped me direct both of those things toward better opportunities.


8 · What this project taught me

Looking back, I think the most useful lesson is that small tools can grow into serious systems when they solve a real pain point.

This project did not start as a polished architecture. It started because I was tired of doing the same boring work over and over again.

First it was scraping. Then storing data. Then scheduling. Then scoring. Then queueing. Then drafting. Then tracking.

Each step came from friction I had already felt.

And that is probably why the system ended up being useful. It was built around a real problem, not around a demo idea.

If there is one sentence I would keep from this whole experience, it is this:

Automation helped me focus on better opportunities.

That was the point from the beginning, even before I knew what the final pipeline would look like.