I Built an AI That Roasts Spotify Playlists With Malaysian Style

Jobless + bored + wanted to try AWS Bedrock = an AI that roasts your Spotify playlist in Malaysian style. I built the full-stack app from scratch and taught Claude 3 Haiku how to speak Manglish by asking Grok to analyze Malaysian social media posts. Posted it on IG, people actually tried it, and I got screenshots back.

Part 1: The Spark

I don’t remember the exact details of that day. Maybe I was staring at my ceiling, scrolling LinkedIn, refreshing JobStreet every five minutes? Doesn’t matter. I was, and still am, in that suspended state of having lost a job. What I DO remember vividly: I wanted to go crazy with my AWS free tier. So I think I did just that: roaming around the AWS dashboard, exploring services, when I saw it.

AWS Bedrock

So I was thinking: what if I could paste my Spotify playlist URL and have an AI analyze my music taste, then roast it? (I’m making this part up because I don’t remember how the idea actually came.) The funny thing is that the next day was 31 August 2025, Merdeka Day, so I was like

alright, let’s make Bedrock cook my playlist, but mamak Malaysian style (cringe, but you get what I wanted to say)

That’s how it started. Spotify API + AWS Bedrock + Malaysian cultural data = an AI that roasts playlists with authentic Malaysian humor.


Part 2: Teaching AI How Malaysians Actually Talk (The Grok Phase)

AI sure is smart, but it doesn’t naturally speak Malaysian English. I needed to teach it cultural context: slang, food references, local lifestyle.

So I asked Grok (the Twitter AI) to analyze how Malaysians post on social media. Like, what’s the pattern? I picked Grok simply because it can easily access tweets (idc man, I still call it Twitter to this day).

What Grok got:

  1. Sentence particles are everything: “lah” for emphasis, “wor” for questions, “mah” for explanations

    • Example: “Why your playlist so mainstream wor?”
    • Without “wor”: “Why your playlist so mainstream?” → sounds flat, not Malaysian
  2. Cultural references carry the joke:

    • Food: roti canai, nasi lemak, teh tarik, mamak
    • Places: KL traffic, Grab rides, pasar malam
    • Lifestyle: office work, Friday night plans, “aiyah” moments
  3. The roast angles:

    • Mainstream: “All Top 40 - your taste flatter than roti canai!”
    • No local artists: “Zero Malaysian songs - you forgot you’re Malaysian ah?”
    • Same artists: “Only listening to 3 artists - very satu hal lah!”
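
Those angles map to numbers the Lambda computes before prompting. Here’s a sketch of what the playlistAnalysis the prompt receives could look like; the names here are illustrative, not my exact fields:

// Hypothetical shape of the analysis fed into the prompt (illustrative names)
interface Track {
  name: string;
  artistName: string;
  popularity: number; // Spotify's 0-100 popularity score per track
}

const MALAYSIAN_ARTISTS = new Set(['Yuna', 'Siti Nurhaliza', 'Faizal Tahir']);

function analyzePlaylist(tracks: Track[]) {
  const artists = new Set(tracks.map((t) => t.artistName));
  return {
    trackCount: tracks.length,
    // High average popularity → the "too mainstream" angle
    popularityScore: Math.round(
      tracks.reduce((sum, t) => sum + t.popularity, 0) / Math.max(tracks.length, 1),
    ),
    // Zero here → the "you forgot you're Malaysian ah?" angle
    localArtistCount: tracks.filter((t) => MALAYSIAN_ARTISTS.has(t.artistName)).length,
    // Few unique artists → the "very satu hal lah" angle
    uniqueArtistCount: artists.size,
  };
}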

The prompt engineering:

// This is what I used
const prompt = `
You are a Malaysian comedian roasting someone's Spotify playlist. 
Your tone should be playful, culturally authentic, and use Malaysian English.

**Use these elements:**
- Slang: lah, wah, aiyah, alamak, paiseh, wor, mah
- Cultural references: roti canai, nasi lemak, mamak, KL traffic, Grab
- Local artists: Yuna, Siti Nurhaliza, Faizal Tahir

**Roast these angles:**
- Mainstream taste (too popular)
- Lack of Malaysian artists
- Low diversity (same few artists)
- Obscure taste (too indie)

Keep it to 2-3 sentences maximum. Make it screenshot-worthy.

Playlist data: ${JSON.stringify(playlistAnalysis)}
`;
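
For the curious, here’s roughly how a prompt like that gets sent to Bedrock. A simplified sketch of the call, not my exact production code (the region here is just an example):

// Simplified sketch of the Bedrock call (not the exact production code)
import { BedrockRuntimeClient, InvokeModelCommand } from '@aws-sdk/client-bedrock-runtime';

const bedrock = new BedrockRuntimeClient({ region: 'ap-southeast-1' }); // example region

async function generateRoast(prompt: string): Promise<string> {
  // Claude 3 Haiku on Bedrock uses the Anthropic Messages API format
  const command = new InvokeModelCommand({
    modelId: 'anthropic.claude-3-haiku-20240307-v1:0',
    contentType: 'application/json',
    body: JSON.stringify({
      anthropic_version: 'bedrock-2023-05-31',
      max_tokens: 300,
      messages: [{ role: 'user', content: prompt }],
    }),
  });

  const response = await bedrock.send(command);
  const payload = JSON.parse(new TextDecoder().decode(response.body));
  return payload.content[0].text; // the roast itself
}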

Part 3: The Statefulness Epiphany

I thought I understood data storage because I used localStorage and Vue’s ref(). Then I learned:

servers don’t remember anything

There was this weird issue I faced where I could roast a playlist, but when I checked the public feed, the same playlist kept showing up again.

I was thinking: Wait, didn’t I already roast this playlist? Why is it showing up again?

So I found that my POST worked (it saved the roast), but GET kept showing duplicates because:

AWS Lambda is STATELESS: each function call might run on a different server instance

POST request → Lambda Instance A → Saves to memory in Instance A
GET request  → Lambda Instance B → Only has original data
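
In code terms, the trap looks roughly like this (a made-up sketch, not my actual handler):

// ❌ The trap: module-level state in a Lambda handler.
// This array lives in ONE warm instance's memory only.
const roastFeed: string[] = [];

export const handler = async (event: { httpMethod: string; body?: string }) => {
  if (event.httpMethod === 'POST' && event.body) {
    roastFeed.push(event.body); // only the instance serving THIS call sees the push
    return { statusCode: 201, body: 'saved (or so I thought)' };
  }
  // A GET routed to a different instance reads ITS OWN empty/stale copy
  return { statusCode: 200, body: JSON.stringify(roastFeed) };
};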

The fix:

-- ✅ RIGHT - Database remembers what backend forgets
CREATE TABLE roasts (
  id INT AUTO_INCREMENT PRIMARY KEY,
  playlist_spotify_id VARCHAR(255) NOT NULL UNIQUE,  -- Prevent duplicates!
  playlist_name VARCHAR(255),
  playlist_owner VARCHAR(255),
  roast_text TEXT NOT NULL,
  popularity_score INT,
  local_artist_count INT,
  created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
  INDEX idx_created_at (created_at)
);

CREATE TABLE playlist_metadata (
  id INT AUTO_INCREMENT PRIMARY KEY,
  playlist_spotify_id VARCHAR(255) NOT NULL UNIQUE,
  first_roasted_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
  last_roasted_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

Again, we have to remember: the backend doesn’t remember anything.

Databases remember everything.
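
With that UNIQUE key in place, the duplicate check is one query before touching Bedrock. A minimal sketch, assuming mysql2 and a connection pool (names are illustrative):

import { createPool, RowDataPacket } from 'mysql2/promise';

const pool = createPool({ /* RDS credentials from env vars */ });

// Return the cached roast if this playlist ID was already roasted,
// so we can skip the Bedrock call entirely.
async function findExistingRoast(playlistSpotifyId: string): Promise<string | null> {
  const [rows] = await pool.execute<RowDataPacket[]>(
    'SELECT roast_text FROM roasts WHERE playlist_spotify_id = ?',
    [playlistSpotifyId],
  );
  return rows.length > 0 ? (rows[0].roast_text as string) : null;
}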


Part 4: Architecture - How This Thing Actually Connects

┌─────────────────────────────────────────────────┐
│                Frontend (Nuxt 3)                │
│  ┌──────────┐  ┌──────────┐  ┌──────────┐       │
│  │ Playlist │  │  Roast   │  │  Public  │       │
│  │  Input   │  │ Display  │  │   Feed   │       │
│  └──────────┘  └──────────┘  └──────────┘       │
└─────────────────────────────────────────────────┘
                         │
                         ▼
┌─────────────────────────────────────────────────┐
│                 AWS API Gateway                 │
│             (Routes HTTP → Lambda)              │
└─────────────────────────────────────────────────┘
                         │
                         ▼
┌─────────────────────────────────────────────────┐
│         Lambda Function (generateRoast)         │
│  1. Check rate limit (DynamoDB)                 │
│  2. Check duplicates (RDS MySQL)                │
│  3. Fetch playlist (Spotify API)                │
│  4. Analyze playlist                            │
│  5. Generate roast (Bedrock + fallback)         │
│  6. Store roast (RDS MySQL)                     │
└─────────────────────────────────────────────────┘

What this actually means:

  • Frontend: Just renders stuff, handles user interaction
  • API Gateway: The bouncer that routes requests to Lambda
  • DynamoDB: Rate limiting - prevents spam (10 requests/day per IP)
  • RDS MySQL: Stores roasts, remembers what we already roasted
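
For step 3 in that Lambda, nothing fancy is needed for public playlists: Spotify’s client credentials flow gives an app-level token. A rough sketch (not my exact code, error handling omitted, env var names are mine):

// Sketch: grab a public playlist with Spotify's client credentials flow
async function fetchPlaylist(playlistId: string) {
  const tokenRes = await fetch('https://accounts.spotify.com/api/token', {
    method: 'POST',
    headers: {
      Authorization:
        'Basic ' +
        Buffer.from(
          `${process.env.SPOTIFY_CLIENT_ID}:${process.env.SPOTIFY_CLIENT_SECRET}`,
        ).toString('base64'),
      'Content-Type': 'application/x-www-form-urlencoded',
    },
    body: 'grant_type=client_credentials',
  });
  const { access_token } = await tokenRes.json();

  // Playlist endpoint returns tracks, owner, and per-track popularity scores
  const playlistRes = await fetch(
    `https://api.spotify.com/v1/playlists/${playlistId}`,
    { headers: { Authorization: `Bearer ${access_token}` } },
  );
  return playlistRes.json();
}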

Part 5: The Race Condition Problem (Or How I Documented a Bug I Haven’t Fixed Yet and Probably Never Will)

I didn’t want people spamming my app. Imagine someone testing 50 different playlists just for fun. So I set up rate limiting: 10 requests per day per IP address.

The idea was simple enough. Check how many times this IP has requested in the last 24 hours. If it’s less than 10, let them through. If not, block them.

What could go wrong? Well, my code, it turns out, and thankfully I’m not influencer enough for anyone to actually hit this problem! :D

What I saw: Rate limiting sometimes allows 11 or 12 requests instead of 10

What I actually coded:

// ❌ Race condition bug
// From: backend/src/services/dynamodb/rateLimitService.ts
// (QueryCommand, PutCommand, and dynamoDocClient come from the file's
// @aws-sdk/lib-dynamodb setup, omitted here)
async checkDailyLimit(clientIp: string): Promise<IRateLimitResult> {
  // STEP 1: Query current count
  const twentyFourHoursAgo = Date.now() - 24 * 60 * 60 * 1000;
  
  const command = new QueryCommand({
    TableName: 'daily-roast-limits',
    KeyConditionExpression: 'ip_address = :ip AND request_timestamp > :timestamp',
    ExpressionAttributeValues: {
      ':ip': clientIp,
      ':timestamp': twentyFourHoursAgo,
    },
  });
  
  const result = await dynamoDocClient.send(command);
  const requestCount = result.Items?.length || 0;
  
  // STEP 2: Check if under limit
  const allowed = requestCount < 10;
  return { allowed, remaining: Math.max(0, 10 - requestCount) };
}

// STEP 3: Increment happens LATER (in separate call!)
async incrementUsage(clientIp: string): Promise<void> {
  const entry: IRateLimitEntry = {
    ip_address: clientIp,
    request_timestamp: Date.now(),
    expiration_time: Date.now() + 24 * 60 * 60 * 1000,
  };
  
  await dynamoDocClient.send(new PutCommand({
    TableName: 'daily-roast-limits',
    Item: entry,
  }));
}

What actually happens with concurrent requests:

T=0ms:    Request A arrives → Query → count = 9 → ALLOWED
T=5ms:    Request B arrives → Query → count = 9 (sees same 9!)
T=10ms:   Request A: Proceeds to process roast
T=12ms:   Request B: Proceeds to process roast
T=2000ms: Request A: Increment → count = 10
T=2005ms: Request B: Increment → count = 11 ❌ EXCEEDED!

Status: This is a KNOWN BUG in production. I documented it as a TODO but haven’t fixed it yet and I don’t want to fix it.

How This Could Be Fixed (Requires Data Model Redesign):

The documented solution would require changing from my current “one-item-per-request” model to a “counter-per-IP” model:

// Current: Each request is a separate DynamoDB item
// Proposed: One item per IP with an atomic counter
// (UpdateCommand comes from @aws-sdk/lib-dynamodb, same as the code above)

await dynamoDocClient.send(new UpdateCommand({
  TableName: 'daily-roast-limits',
  Key: { ip_address: clientIp },
  // Check and increment happen in ONE atomic operation - no race window
  UpdateExpression: 'ADD request_count :inc',
  // attribute_not_exists covers the very first request, before the counter exists
  ConditionExpression: 'attribute_not_exists(request_count) OR request_count < :limit',
  ExpressionAttributeValues: {
    ':inc': 1,
    ':limit': 10
  }
}));

This is more than just adding a ConditionExpression - it requires (a fuller sketch follows below):

  1. Schema migration (item-per-request β†’ counter-per-IP)
  2. Handling counter resets (24-hour windows)
  3. TTL strategy adjustment

Why my current “one-item-per-request” model can’t easily use atomic operations:

  • Query counts items (not atomic)
  • Put creates new items (not updating a counter)
  • No single field to conditionally check
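
Putting those pieces together, here’s a fuller sketch of the counter-per-IP direction. This is the AI-suggested approach, not shipped code; the table name reuses the one above, and the day-bucketed key is my illustration of how the 24-hour reset could work:

import { DynamoDBClient } from '@aws-sdk/client-dynamodb';
import { DynamoDBDocumentClient, UpdateCommand } from '@aws-sdk/lib-dynamodb';

const doc = DynamoDBDocumentClient.from(new DynamoDBClient({}));

// Sketch only: check + increment in ONE atomic call, with a day-bucketed
// key so the counter resets itself (at midnight UTC, close enough for a roast app)
async function tryConsumeQuota(clientIp: string): Promise<boolean> {
  const day = new Date().toISOString().slice(0, 10); // e.g. "2025-08-31"
  try {
    await doc.send(new UpdateCommand({
      TableName: 'daily-roast-limits',
      Key: { ip_address: `${clientIp}#${day}` }, // new day = fresh counter
      UpdateExpression:
        'ADD request_count :inc SET expiration_time = if_not_exists(expiration_time, :ttl)',
      ConditionExpression:
        'attribute_not_exists(request_count) OR request_count < :limit',
      ExpressionAttributeValues: {
        ':inc': 1,
        ':limit': 10,
        ':ttl': Math.floor(Date.now() / 1000) + 48 * 60 * 60, // TTL sweeps old buckets
      },
    }));
    return true; // under the limit, request counted
  } catch (err: unknown) {
    if ((err as { name?: string }).name === 'ConditionalCheckFailedException') {
      return false; // over the limit
    }
    throw err;
  }
}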

Why I Haven’t Fixed It Yet:

  • This project is just for fun
  • I don’t have the mood to do it
  • I have a backlog of anime/movies/games to finish
  • I’m aggressively looking for a job

The fix above is mostly from AI suggestions, but yeah, I kind of get how I should do it.


Part 6: People Actually Tried It

After deploying the app, I posted it on Instagram stories. Just a quick “built this thing that roasts your Spotify playlist” with a link. I thought maybe a few friends would try it for the memes. They didn’t just scroll past; they were actually pasting their Spotify URLs!!!!!

(screenshots: “they voted lol” / “friend shared” / “they tried”)

Real users, not just me testing my own code. That’s something jobless projects don’t usually get. It’s fun to see the results, tbh.

Here are some photos from when I was testing it in Bruno before deploying. Honestly, it was a very fun thing to do.


Part 7: What I Haven’t Fixed Yet

Why do you need to know this? Because real projects aren’t perfect. Being honest about what works, what’s broken, and what you’d do differently is more valuable than fairy tales about flawless code.

Known Issues

Issue 1: Rate Limiting Race Condition ❌

Status: Known bug, documented as TODO, NOT fixed

The Problem:

  • Current code uses a check-then-increment pattern (yapped about it in Part 5)
  • Two concurrent requests can both pass the check (both see count = 9)
  • Result: a user gets 11 requests instead of the 10-request limit

Why I Haven’t Fixed It:

  • I don’t want to
  • It doesn’t pay my bills

Future Improvements

Improvement 1: Pagination Performance ⚠️

Sure, this works for now, but it could be better at scale.

Current Implementation: LIMIT/OFFSET approach (a keyset alternative is sketched after the lists below)

  • Works fine for <10,000 records
  • Gets slow at page 100+ (OFFSET 1000+)

Why It’s Fine Now:

  • The public feed only has a handful of roasts so far
  • Nobody is clicking past the first page
  • The fix isn’t needed until the dataset grows
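
If the feed ever grows past that, the usual fix is keyset (cursor) pagination: seek from the last created_at you saw instead of OFFSET-ing past rows. A sketch of the direction (not implemented):

import { Pool, RowDataPacket } from 'mysql2/promise';

// Sketch (not implemented): keyset pagination for the public feed.
// Seeks via idx_created_at instead of scanning past OFFSET rows.
async function getFeedPage(pool: Pool, before?: string, pageSize = 10) {
  const limit = Math.min(Math.max(pageSize, 1), 50); // sanitized, inlined below
  const sql = before
    ? `SELECT * FROM roasts WHERE created_at < ? ORDER BY created_at DESC LIMIT ${limit}`
    : `SELECT * FROM roasts ORDER BY created_at DESC LIMIT ${limit}`;
  const [rows] = await pool.query<RowDataPacket[]>(sql, before ? [before] : []);
  // The caller passes the last row's created_at back as the next cursor
  return rows;
}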

Improvement 2: Duplicate Detection Accuracy ⚠️

Status: Works for most cases

Current Implementation: Checks by playlist_spotify_id

  • Prevents re-roasting the same playlist
  • Still works when a playlist is renamed (the Spotify ID doesn’t change)

Limitation: Same content, different playlist = new roast

  • Example: two playlists with the same 50 songs under different names
  • Rare edge case

Why It’s Acceptable Now:

  • Playlist ID is unique and permanent
  • Most users won’t have duplicate content across playlists
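
If I ever cared enough to close that edge case, one direction (not implemented, just sketching) would be to fingerprint the content itself: hash the sorted track IDs and store that in its own UNIQUE column:

import { createHash } from 'node:crypto';

// Sketch (not implemented): two playlists with identical tracks get the
// same fingerprint even if their names and Spotify IDs differ
function playlistContentHash(trackIds: string[]): string {
  const canonical = [...trackIds].sort().join(',');
  return createHash('sha256').update(canonical).digest('hex');
}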

Final Thoughts

Did this get me a job? I wish, man, lol. But hey, on the bright side, joblessness built something good, cuz for once I had the time to actually finish a project.

Did this get me something better? Yeah, proof that I can:

  • Build full-stack from scratch (not just frontend anymore)
  • Integrate AI with actual business value
  • Ship production-ready code (not just another abandoned project)
  • Build something people actually use and share

What’s next? Maybe I’ll add user accounts so people can save their roasts. Maybe I’ll add comparison mode to roast two playlists at once. Or maybe I’ll just apply to jobs again tomorrow.
