
FiveM Server Optimization

Reduce lag spikes, improve server tick stability, and support larger player counts lag-free. This guide focuses on real-world FXServer tuning, script (resource) optimization, and monitoring.

Reading time: ~12 minutes
Category: FiveM Performance
Tags: FXServer tuning, Resource profiling, Network stability, OneSync configuration

Table of contents

  1. What “lag” means in FiveM
  2. Key metrics to watch
  3. Quick wins (5–10 minutes)
  4. Resource optimization (biggest wins)
  5. Asset Caching
  6. server.cfg best practices
  7. OneSync & entity streaming
  8. Network & routing tweaks
  9. Client assets, loading, and FPS
  10. How to use the FiveM resource profiler
  11. Monitoring & maintenance routine
  12. Hosting & Server specs
  13. FAQ

FiveM performance issues usually come from three places: (1) CPU-heavy resources, (2) entity/streaming overload, and (3) unstable networking or hitches. The fastest path to smooth gameplay is measuring, fixing the worst offenders and repeating.

Rule of thumb: optimize server CPU first (scripts/resources), then entity streaming, then client assets. Server configuration tweaks and hardware changes often yield minimal gains when the bottleneck is single-threaded script performance; prioritize profiling if you want significant improvements.

1) What “lag” means in FiveM

Players call all sorts of issues “lag,” but there are different root causes:

  • Thread hitch warnings: thread loop delays, causing lag and timeouts when severe.
  • High script CPU: resources consuming too much CPU time.
  • Entity overload: too many peds/vehicles/objects or frequent entity updates.
  • Network issues: packet loss, poor routing, or overloaded bandwidth.
  • Client FPS drops: huge MLOs (multi-level objects), unoptimized maps, or heavy textures/models.

Important: Thread hitch warnings and other software-level performance issues can increase pings between the FiveM game client and the server, and can result in player timeouts. This is often incorrectly attributed to DDoS attacks or network problems. We recommend monitoring resource usage and network activity to differentiate between the underlying causes.

DDoS attacks are usually accompanied by an increase in inbound traffic. Software (script/resource) issues are not, but may coincide with an increase in outbound traffic (flooding events or state bag updates to the players). Network thread hitch warnings are often not DDoS attack related.

2) Key metrics to watch

Track these consistently so you know what actually improved:

Thread hitch warnings

Look for spikes that correlate with lag complaints or player peaks.

Resource CPU time

Identify the top 3 resources by CPU and fix those first.

Entity counts & streaming

Too many entities in a hot area = stutter and desync.

Network quality

Packet loss and bad routes can mimic “server lag.”

Quick operational checklist

  • Measure at low load and peak load (results differ).
  • Change one thing at a time; keep notes.
  • Re-test after each resource update or big content drop.

3) Quick wins (5–10 minutes)

If you only have a few minutes, do these steps first. They don’t require rewriting your framework - they just make it easier to see what’s causing hitches and rule out common issues.

1) Turn hitch warnings into a signal

Hitch warnings are the fastest way to confirm “server lag” vs. network/FPS complaints. Set a threshold you care about, then correlate spikes with resource CPU or zone activity.

# server.cfg (example)
# Choose a hitch threshold that matches your tolerance (lower = more sensitive).
# Then watch logs/console during peak and fix the offenders.
set sv_mainThreadHitchWarning "350"

Tip: don’t chase a single spike. Look for repeat offenders that line up with player complaints or peak hours.

2) Profile at peak, not at 5 players

Most performance issues only show up under load. Run the FiveM profiler during your busiest window, identify the top CPU resources, and fix the top 1–3 first.

Aim to profile while thread hitch warnings are actually being printed to the console. A capture can only reveal the cause of a hitch if it covers the hitch itself - start recording before, measure during, and stop after.

This is where it pays off: baseline → top offenders → one change → re-test.

3) Kill tick spam

Common CPU time consumers include unnecessary per-tick work: tight loops, global scans and frequent DB writes. Convert polling to events, increase waits where possible, and debounce state saves.

  • Replace “scan all players/entities” with zones/grids.
  • Debounce DB updates (batch changes instead of writing constantly).
  • Stop updating state bags / shared state every frame.
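For the state bag point in particular, a minimal debounce sketch might look like this (the fuel key, change threshold, and interval are hypothetical; the idea is to replicate only when a value has changed meaningfully, and at most once per interval):

-- Debounced state bag write: replicate on meaningful change, not every tick.
-- The key name ('fuel'), threshold and interval are illustrative values.
local UPDATE_INTERVAL = 1000 -- ms: never replicate more often than this
local lastSent, lastSentAt = {}, {}

function SetVehicleFuelState(vehicle, fuel)
    local now = GetGameTimer()
    local changedEnough = not lastSent[vehicle] or math.abs(fuel - lastSent[vehicle]) >= 1.0
    local oldEnough = not lastSentAt[vehicle] or (now - lastSentAt[vehicle]) >= UPDATE_INTERVAL

    if changedEnough and oldEnough then
        Entity(vehicle).state:set('fuel', fuel, true) -- replicated state bag write
        lastSent[vehicle], lastSentAt[vehicle] = fuel, now
    end
end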

4) Add a basic cleanup routine

Long uptimes degrade when entities accumulate. A simple cleanup policy prevents progressive performance decay.

  • Delete abandoned vehicles/props after a timeout.
  • Cap AI/spawn density in hot zones.
  • Clean up event props after server-wide scenes.
Recommended workflow: set a hitch warning threshold → profile at peak → fix top 1–3 resources → re-test at peak. If you can’t measure it, you can’t reliably improve it.

4) Resource optimization (biggest wins)

Resources are the #1 cause of server hitches. Fixing a single bad loop can outperform any hardware upgrade.

Audit your worst offenders

  • Identify scripts that run every tick unnecessarily.
  • Reduce polling: replace tight loops with events/callbacks where possible.
  • Batch work: do less per tick, spread heavy operations over time.
  • Cache database results and avoid repeating the same queries.
Common mistake: “while true do Wait(0)” on the server for logic that could run every 250–1000ms, or only when a player is nearby / state changes.
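A minimal sketch of that fix, assuming the work genuinely only needs to run about once per second (the interval and the checkAllPlayers helper are placeholders for your own logic):

-- Before: while true do Wait(0) ... end runs the logic every server tick.
-- After: run on a sensible interval and skip the work entirely when idle.
CreateThread(function()
    while true do
        Wait(1000) -- once per second is plenty for most server-side checks
        local players = GetPlayers()
        if #players > 0 then
            checkAllPlayers(players) -- hypothetical helper containing the real work
        end
    end
end)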

Database and I/O hygiene

  • Index the database columns used for frequent lookups (inventory, users, vehicles, properties) to avoid full table scans.
  • Prefer prepared statements and pooled connections.
  • Avoid writing to DB every small state change; batch or debounce updates.
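As a rough sketch of the last point, dirty rows can be buffered in memory and flushed on a timer instead of writing on every change (the flush interval is arbitrary, and SavePlayerRow is a placeholder for whatever prepared/async query your MySQL wrapper provides):

-- Batch/debounce DB writes: keep the latest state per player, flush periodically.
local FLUSH_INTERVAL = 30000 -- ms; balance write load against acceptable loss on a crash
local dirty = {}             -- identifier -> latest data to persist

function MarkPlayerDirty(identifier, data)
    dirty[identifier] = data -- overwrite: only the most recent state matters
end

CreateThread(function()
    while true do
        Wait(FLUSH_INTERVAL)
        local batch = dirty
        dirty = {}
        for identifier, data in pairs(batch) do
            SavePlayerRow(identifier, data) -- placeholder: one prepared UPDATE keyed on an indexed column
        end
    end
end)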

Reduce server-side work per player

  • Scope proximity checks: grid/zone-based checks instead of scanning all players.
  • Limit expensive “global” operations (e.g., iterating all vehicles every tick).
  • Use state bags / shared state carefully - don’t spam updates.
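A rough sketch of grid-based scoping: bucket players into coarse cells so proximity logic only looks at one cell (and its neighbours) instead of comparing every player against every other player. The cell size is arbitrary, and the grid should be rebuilt on a slow timer rather than every tick:

-- Bucket players into a coarse grid keyed by world coordinates.
local CELL_SIZE = 250.0 -- metres per cell; tune to the range your checks actually need

local function cellKey(x, y)
    return math.floor(x / CELL_SIZE) .. ':' .. math.floor(y / CELL_SIZE)
end

function BuildPlayerGrid()
    local grid = {}
    for _, src in ipairs(GetPlayers()) do
        local coords = GetEntityCoords(GetPlayerPed(src))
        local key = cellKey(coords.x, coords.y)
        grid[key] = grid[key] or {}
        grid[key][#grid[key] + 1] = src
    end
    return grid -- scripts then iterate a single cell instead of all players
end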

5) Caching FiveM client assets (skins, NUI, and resources)

When people talk about “caching” in FiveM, they’re usually referring to client asset caching - not gameplay traffic. This applies to downloadable files like skins, clothing, vehicle textures, maps, and NUI assets that players have to download when joining your server.

To achieve this, players download the assets from a caching reverse proxy such as an Nginx web server or CDN service instead of fetching them directly from the server. Our FiveM DDoS Protection service includes a download cache as standard with all plans.
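As a rough example (an assumption to verify against the current FiveM asset-proxy documentation for your artifact version), FXServer can be pointed at an external file server so clients fetch downloads from the cache; the URL below is a placeholder:

# server.cfg (illustrative - confirm the exact fileserver_add usage in the FiveM docs)
fileserver_add ".*" "https://cdn.example.com/files"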

Important clarification
You are not caching live game traffic. Gameplay networking, ticks, and entity sync are never cached. Only static, downloadable assets are cached.

What assets can be cached?

  • Clothing and ped textures (EUP, custom skins, uniforms).
  • Vehicle models and textures.
  • MLO and map assets.
  • NUI assets (HTML, CSS, JavaScript, images).
  • Other static resource files required on join.

Why asset caching helps

  • Faster first-time joins for new players.
  • More reliable downloads for large asset packs.
  • Less bandwidth pressure on your game server.
  • Better performance during peak join waves.
What players feel
Asset caching improves loading times and join stability, not server tick rate or in-game FPS. It makes the server feel more professional and reliable.

Best practices for FiveM asset caching

  • Use proper cache headers for static assets.
  • Version assets when updating (avoid breaking player caches).
  • Compress text-based files (JS, CSS, JSON, SVG).
  • Host large asset sets separately from core server logic.
  • Use geographically distributed delivery if your player base is global.
Common mistake
Asset caching will not fix rubber-banding, hitch warnings, or script-related lag. Always profile server performance separately.

6) server.cfg best practices

Your server.cfg should be clean, consistent, and built for stability. Treat it like production infrastructure.

Operational best practices

  • Keep artifacts updated (but don’t update blindly - test on staging first).
  • Remove unused resources and old dependencies.
  • Organize resources by category and load order.
  • Separate secrets (keys) from the public config where possible.
# Example: tidy load order (conceptual)
ensure core
ensure chat
ensure framework
ensure database
ensure jobs
ensure housing
ensure vehicles
ensure ui
ensure maps
ensure standalone
Avoid: loading 200+ resources “because maybe we’ll use them later.” Every extra resource increases conflict risk and debugging time.
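For the “separate secrets” point above, one common pattern is pulling keys in from a second file with exec, so the main server.cfg can be shared or versioned without leaking credentials (the file name and convars shown are examples; only include the ones your resources actually read):

# server.cfg
# Keep secrets.cfg out of version control and public pastes.
exec secrets.cfg

# secrets.cfg (example)
sv_licenseKey "changeme"
set steam_webApiKey "changeme"
# Only if your database resource reads this convar:
set mysql_connection_string "mysql://user:password@localhost/db"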

7) OneSync & entity streaming

High player counts and busy city areas can overload entity updates. The fix is usually controlling how many entities exist and how often they change.

Entity discipline

  • Delete unused entities (abandoned vehicles, props left behind).
  • Limit AI density and scripted pedestrians.
  • Avoid spawning large object sets for every player.
  • Prefer instanced interiors or limited-access zones for heavy scenes.

Streaming hot zones

If a single location always lags (e.g., Legion Square during events), treat it as a performance budget problem: reduce props, cut AI, simplify MLOs, and avoid scripts that tick faster in that area.

Practical strategy: add “cleanup routines” for vehicles/objects, and cap spawn counts per zone. It prevents slow degradation over long uptimes.
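A minimal cleanup sketch along those lines, assuming OneSync and the server-side entity natives (GetAllVehicles, GetPedInVehicleSeat, DoesEntityExist, DeleteEntity). The interval and the “empty for two passes” rule are arbitrary starting points - tune them to your server:

-- Periodically delete vehicles that have sat empty for a while.
local CLEANUP_INTERVAL = 10 * 60 * 1000 -- run every 10 minutes
local seenEmptySince = {}               -- vehicle handle -> time first seen empty

local function isOccupied(vehicle)
    for seat = -1, 6 do
        if GetPedInVehicleSeat(vehicle, seat) ~= 0 then
            return true
        end
    end
    return false
end

CreateThread(function()
    while true do
        Wait(CLEANUP_INTERVAL)
        local now = GetGameTimer()
        for _, vehicle in ipairs(GetAllVehicles()) do
            if DoesEntityExist(vehicle) and not isOccupied(vehicle) then
                seenEmptySince[vehicle] = seenEmptySince[vehicle] or now
                if now - seenEmptySince[vehicle] >= CLEANUP_INTERVAL then
                    DeleteEntity(vehicle)       -- still empty on the next pass: remove it
                    seenEmptySince[vehicle] = nil
                end
            else
                seenEmptySince[vehicle] = nil   -- occupied again (or gone): reset the timer
            end
        end
    end
end)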

8) Network & routing tweaks

Network issues can look like server lag. The goal is stable routing, low packet loss, and enough headroom.

What to do

  • Host close to your player base region (latency matters).
  • Use a quality network provider and monitor ICMP packet loss at the network level.
  • Keep bandwidth headroom for peak times (events, updates, big joins).
  • Use FiveM DDoS protection that doesn’t introduce unstable latency spikes.

Watch out: Some script/resource issues can look like a DDoS attack or networking issue (and vice versa). Profiling, traffic inspection, and network-level tests (ping, MTR) help confirm which one you’re facing.

Script issues can cause packet loss at the software level because the client and/or server can't process traffic fast enough. Measure ICMP packet loss at the network level with ping and MTR tests to differentiate between underlying causes.

If there isn't any packet loss on the network, but the FiveM software reports packet loss, you know it's a software level issue due to high script CPU usage, hang/freeze issues or similar.

9) Client assets, loading, and FPS

Even if your server is perfect, heavy client assets can tank FPS and feel like lag. Optimize what players stream and render.

Asset optimization tips

  • Compress textures and avoid excessively high resolutions for small objects.
  • Reduce draw calls by simplifying overly complex props.
  • Prefer fewer, optimized MLOs over many unoptimized ones.
  • Audit NUI (UI) assets: optimize images, avoid excessive JS loops.
Easy win: keep your “base install” lightweight, then add optional content gradually. Players judge your server in the first 5–10 minutes.

10) How to use the FiveM resource profiler

The fastest way to eliminate hitches is to identify which resource is consuming the most CPU time. Profiling turns “it feels laggy” into a ranked list of problems you can fix.

The FiveM profiler is a powerful tool to help you identify problematic resources or scripts.
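The usual console workflow looks roughly like this (the frame count is an example, and sub-commands may vary by artifact version - run profiler help to confirm):

# Typed in the server console (or txAdmin console) while the lag is happening:
profiler record 500
# ...wait for the capture to finish, then:
profiler view
# Optionally save the capture to compare before/after a fix:
profiler saveJSON capture.json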

Step 1: Establish a baseline at peak

  • Profile during your busiest hours (or reproduce load with staff testing).
  • Write down your current “feel”: rubber-banding? delayed events? only in specific zones?
  • Note player count and what major activities were happening (jobs, heists, events, etc.).

Step 2: Find the top CPU resources

Look for resources that consistently sit at the top or spike during lag reports. Fix those first before touching anything else.

What “bad” looks like
A resource that ticks constantly (or spikes) and correlates with hitch warnings is your primary suspect. Optimize loops, reduce polling, and remove expensive per-player scans.

Step 3: Optimize with a “one change” workflow

  • Make one change (e.g., reduce a loop frequency, add caching, debounce DB writes).
  • Restart, re-test under similar conditions, and compare results.
  • Keep a simple changelog so you can revert quickly if needed.

Common fixes that move the needle

  • Replace tight server loops with events/callbacks.
  • Reduce frequency of proximity checks (use zones/grids, not global scans).
  • Cache expensive computations and DB reads.
  • Batch updates instead of writing every small state change.
Avoid false wins
Don’t profile at 5 players and assume it scales. Re-check at peak, because hotspots often appear only with load.

11) Monitoring & maintenance routine

Optimization is a process. A stable routine beats “random tuning” every time.

Weekly profiling

Record top CPU resources, hitch warnings, and player peak behavior.

Fix top 1–3 issues

Script loops, entity spam, and DB hotspots usually dominate.

Deploy safely

Test on staging, then roll out with a clear changelog.

Validate at peak

Measure during your busiest hours (not at 3am).

Uptime note: Long uptimes are fine if you clean up entities, rotate logs, and keep memory usage healthy. If performance degrades over time, you likely have accumulating entities or state spam.

12) Hosting & Server specs

Optimization fixes what you control (scripts, entities, assets). Hardware, network, and I/O performance determine your performance ceiling. A faster or larger CPU won’t fix inefficient code, but it does give you more headroom at peak - unless the issue is an infinite loop, which will saturate its thread no matter how fast the core is.

Rule of thumb: optimize first, then scale hardware when profiling shows you’re CPU-bound, memory-bound, or network-limited without an optimization path.

What matters most

  • CPU (single-core): higher clock + strong IPC reduces thread hitches.
  • RAM headroom: prevents swap and keeps spikes from turning into stalls.
  • NVMe storage: helps DB + asset I/O and reduces “slow join” pressure.
  • Network quality: good routing + low packet loss avoids “fake lag” reports.

If you don't want to worry about hardware limitations getting in the way, be sure to choose a reliable FiveM hosting provider.

Common “lag” symptoms & what they usually mean

  • Hitch warnings / rubber-banding at peak → CPU-bound (scripts, main thread, entity load)
  • Stutters after long uptime → entity buildup / memory leaks / state spam
  • Players complain about “lag” but there are no hitch warnings → routing/packet loss or client FPS drops
  • Slow joins / large download stalls → asset delivery / storage I/O / bandwidth headroom

Tip: if your profiler shows stable resource CPU but players still “lag”, check for ICMP packet loss and DDoS attacks on the network before changing configurations.

When upgrading helps (and when it won’t)

  • Helps when profiling shows consistent CPU saturation, swap, or bandwidth congestion.
  • Won’t help if one resource is inefficient (tight loops, global scans, excessive entity updates).
  • Best approach: fix top offenders → re-test at peak → upgrade only if you’re still bound.
Important: “More cores” doesn’t automatically fix FiveM hitching. Strong single-core performance and clean scripting usually matter more.

Quick spec checklist (practical)

  • Prefer fast CPU cores over high core count.
  • Keep RAM usage comfortably below the limit (avoid swapping).
  • Use NVMe SSDs and avoid overloaded shared storage.
  • Host near your player base (latency + routing).
  • Leave bandwidth headroom for peak joins and updates.

FAQ

What’s the #1 optimization for a laggy FiveM server?

Profile your resources and fix the highest CPU scripts first. One bad loop can create hitches that everyone feels.

Task Manager shows low CPU usage. Why do I have thread hitch warnings and lag?

The Windows Task Manager displays all CPU usage values as a percentage of the total CPU time available on your machine across all CPU cores.

A single thread can only execute instructions on one logical CPU core at a time. A thread will slow down and cause hitch warnings when the CPU time available to a single thread (running on a single CPU core) is exhausted. The hitch warning indicates the (real world) time taken for a single tick to complete on the thread.

The per-thread CPU usage can be 100% when your server's overall/total CPU usage in Task Manager is only 6.25% (for example if your server has 16 logical CPU cores and there is 1 active thread executing an infinite loop).

If a thread can't process instructions fast enough and there are thread hitch warnings, lag is often experienced in-game.

Thread hitch warnings without high single threaded CPU usage can indicate a hang/freeze, e.g. if the server is waiting for an I/O task such as a file read/write, database operation or network task to complete. In some cases networking issues can cause this.

Our FiveM DDoS Protection service includes in-depth network and server performance monitoring, with charts of single-threaded CPU usage, software-level pings, and network activity that provide invaluable insight into server and network performance.

Should I remove resources even if they “seem fine”?

Yes, if they’re unused. Extra resources increase conflicts, memory usage, and debugging time. Keep production lean.

What should I set sv_mainThreadHitchWarning to?

Pick a threshold (in ms) that matches your tolerance for noticeable stutters, then tune it based on real peak-hour behavior. Use it as an alert signal, then confirm the cause with profiling - don’t guess.

Why do players lag only in one area of the map?

That’s usually a streaming and entity density problem (heavy MLOs, too many props/vehicles/peds, or scripts that tick harder there). Treat that zone as a performance budget.

How do I optimize for higher player counts?

Strong single-core CPU performance matters a lot, but you also need script discipline: fewer per-player loops, smarter proximity checks, and strict entity cleanup.

Does asset caching improve in-game FPS?

No. Asset caching improves loading times and join reliability, not in-game FPS. FPS issues are usually caused by heavy client assets, unoptimized maps, or scripts running too frequently.

How often should I profile my FiveM server?

At minimum, profile weekly and after any major update. You should also profile during peak player hours, because many performance issues only appear under load.

Why does my server lag only at higher player counts?

Many scripts scale poorly with player count. Loops that scan all players or entities get more expensive as the population grows (often quadratically, when every player’s logic checks every other player), causing hitches that don’t appear at low population.

Are restarts required for good FiveM performance?

Restarts aren’t inherently bad, but needing frequent restarts usually indicates entity buildup, memory leaks, or scripts that don’t clean up after themselves. A well-optimized server can run long uptimes without degradation.

Will upgrading hardware always fix FiveM lag?

No. Hardware helps, especially strong single-core CPU performance, but poorly optimized scripts can overwhelm even powerful servers. Profiling and script optimization usually provide the biggest gains.

What’s the biggest mistake server owners make when optimizing?

Changing many things at once without measuring results. Optimization should be iterative: profile, change one thing, re-test, and confirm the improvement before moving on.

Want smoother performance without the hassle?

Run your FiveM server on fast CPUs, low-latency networks, and infrastructure designed for stable ticks and peak-hour loads.

DDoS protected · NVMe storage · Low latency · 24/7 support