# Use fragmented MP4

I’m not going to go into too much detail on fMP4; you can see more here.

The basic gist is this: for most MP4s, the browser needs to download the entire file before it can begin playback. If the file is long and large, the user is going to be waiting a while.

But there's a special form of MP4 called a "fragmented MP4" or "fMP4", which splits the media into short, independently playable fragments and holds its metadata at the beginning of the file instead of the end. Strictly speaking, ffmpeg's `-movflags +faststart` flag produces a "fast start" MP4 rather than a fragmented one: it just moves the metadata (the `moov` atom) to the front, which is enough for progressive playback. For a true fMP4, use `-movflags frag_keyframe+empty_moov`. You can also relocate the metadata in an already-created MP4 with a utility like [QTIndexSwapper2](http://renaun.com/blog/code/qtindexswapper/).
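If you want to verify that a conversion actually moved the metadata to the front, you can inspect the file's top-level boxes yourself. Here's a minimal sketch in Python; the `_box` helper and the synthetic byte strings are just for illustration (a real file would be read with `open(path, "rb").read()`):

```python
import struct

def top_level_boxes(data: bytes):
    """Yield the type of each top-level box (atom) in an MP4 byte stream."""
    offset = 0
    while offset + 8 <= len(data):
        size, box_type = struct.unpack_from(">I4s", data, offset)
        if size == 1:  # 64-bit "largesize" follows the 8-byte header
            size = struct.unpack_from(">Q", data, offset + 8)[0]
        elif size == 0:  # box extends to the end of the file
            size = len(data) - offset
        yield box_type.decode("ascii", errors="replace")
        offset += size

def is_fast_start(data: bytes) -> bool:
    """True if the moov (metadata) box comes before the mdat (media) box."""
    boxes = list(top_level_boxes(data))
    if "moov" not in boxes or "mdat" not in boxes:
        return False
    return boxes.index("moov") < boxes.index("mdat")

def _box(box_type: bytes, payload: bytes = b"") -> bytes:
    """Build a minimal MP4 box, purely for demonstration."""
    return struct.pack(">I4s", 8 + len(payload), box_type) + payload

# Synthetic examples: metadata-first vs. metadata-last layouts.
fast = _box(b"ftyp") + _box(b"moov") + _box(b"mdat", b"\x00" * 16)
slow = _box(b"ftyp") + _box(b"mdat", b"\x00" * 16) + _box(b"moov")
```

In a genuinely fragmented MP4 you'd additionally see `moof`/`mdat` pairs after the initial `moov`, so the same box walk can spot those too.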

A fragmented MP4 can start playback with just a fraction of its data, and continue loading as it plays. A much better user experience.

Even better is to use a streaming format such as HLS or MPEG-DASH, which we’ll get to in a later best practice. But that’s much more complicated; if you don’t want to go that far, just use fMP4.

#### Current Elm Difficulty = Very Easy

This is part of the encoding process; these files just look like standard MP4s to Elm. The only caveat is that you should test to make sure fMP4s work in the browsers you need to support. I’ve never come across a browser that doesn’t support them.

