A rant on the difficulties of downloading an ongoing stream from YouTube with yt-dlp.
Published: May 2026
In this post I go through the difficulties I have had when downloading ongoing streams from YouTube using yt-dlp. yt-dlp is a big project with a lot of switches and features and, while I have not explored every single option it has to offer, it is quite bad that something as simple as downloading an ongoing stream is such a hard problem with the go-to tool for downloading anything from YouTube. At the end I present my workaround, which is to run multiple yt-dlp processes in parallel, downloading the stream redundantly. I wrote a simple orchestrator in C/C++ that automates this.
This is the no-brain starter method, but it is not a great solution. The problem is that it only downloads the stream from the current position. How are you going to get the beginning of the stream if it started hours ago?
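For reference, the starter method is presumably nothing more than the plain invocation (the URL here is a placeholder):

```shell
# Naive invocation: joins the live stream at the current position.
yt-dlp 'https://www.youtube.com/watch?v=XXXXXXXXXXX'
```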
This will download the stream from the beginning up to the current fragment and will even continue until the end of the stream. But what do you do when fragment 4321 of 5000+ fails to download? Several times I have encountered fragments that failed the first time (exhausting all default retries) but could be downloaded after the download was restarted. The problem is that the entire video has to be re-downloaded in that case. And what if fragment 4322 has a problem during the re-download? Am I supposed to restart the download from the beginning yet again?
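The flag under discussion here is --live-from-start; a sketch of the invocation (placeholder URL):

```shell
# Downloads an ongoing stream from its first fragment onwards,
# then keeps going until the stream ends.
yt-dlp --live-from-start 'https://www.youtube.com/watch?v=XXXXXXXXXXX'
```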
Using this option, the downloaded fragments are kept as separate files, and fragments that have already been downloaded are skipped if the process is restarted. But there are several problems. If the download is interrupted before the end of the stream, yt-dlp will merge all fragments into an incomplete video. At that point the download should be restarted as soon as possible; it should not spend a long time (and a lot of storage space) building an incomplete video. Furthermore, the merged video will (if combined with --live-from-start) prevent yt-dlp from downloading the stream again, because yt-dlp will think it has already downloaded it. The user is still required to babysit the download and interrupt it if any fragments fail to download.
This will abort (i.e. exit the download) instead of continuing if a fragment fails to download. It is sometimes possible to restart the download (why? when? how? I have no idea). With this option you still have to actively monitor the download, because if a fragment fails then yt-dlp obviously won't download the rest of the stream and the download has to be restarted manually.
Using --abort-on-unavailable-fragments, the download stops when a fragment is unavailable. This, in combination with --keep-fragments, might make restarts easier (still manual restarts, though). I have not been able to test this combination thoroughly. If a fragment is irrevocably lost, will this actually manage to download the rest of the stream?
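A sketch of what the combination might look like, wrapped in a crude restart loop (placeholder URL; whether this terminates sensibly when a fragment is truly lost is exactly the open question above):

```shell
# Keep restarting until yt-dlp exits successfully; previously
# downloaded fragments are kept on disk and skipped on each restart.
until yt-dlp --live-from-start --keep-fragments \
             --abort-on-unavailable-fragments \
             'https://www.youtube.com/watch?v=XXXXXXXXXXX'; do
    sleep 10
done
```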
I haven't tested this either, but I would assume it will hammer on fragment 4321 indefinitely. This is good if the fragment becomes available in the future, but some fragments are irrevocably lost and will never become available. Will this option then prevent the download of the remainder of the stream?
This is obviously not sufficient either. The fragments could be unavailable during all retries and only become available after the retries have been exhausted. And if yt-dlp spends too much time waiting for missing fragments, it will miss the remainder of the stream.
This is currently not supported for streams on YouTube. Why not? Who knows... It would be helpful if a fragment failed and it is known approximately where, in time, that fragment is.
This option has to be mentioned because of its usefulness, even though it does not help with downloading fragments directly. When a download has ended, yt-dlp spends a lot of time (and space) fixing up the stream. I do not know what this fixup actually does, but it certainly takes a lot of time and space: the entire stream is written to disk again, meaning yt-dlp temporarily requires double the stream's size in storage. --fixup never can be used to stop quickly after the download and to restart it quickly. Since the fixup is a post-processing step, it is fine to skip it for the moment. I wish the documentation was clearer about what it does and how it can be replicated after the download has finished.
I want to tell yt-dlp to just download the stream! Why is it so hard!? When downloading a stream could you please try repeatedly to download the fragments that have failed but without preventing the download of the remainder of the stream? Stop creating incomplete and corrupt files by default!
There is no good solution to this problem as long as fragments are downloaded sequentially. The solution must be able to fetch fragments non-sequentially: failed fragments are retried indefinitely without blocking the download of new fragments. This has a simple solution, too. Just put the failed fragments in an array and retry them every now and then.
yt-dlp would need to change its insane default behaviour and the policy that it is better to have a corrupt final file than no file at all. This is a false dichotomy. Once the file has been finalized it is much harder to fix. This is made even worse by yt-dlp corrupting the file by default, often without the user's knowledge.
Some features that would help.
Other users have been complaining for years about similar problems.
To work around these problems I have implemented a simple orchestrator that starts and maintains multiple yt-dlp processes that redundantly download the stream. The processes are started with --abort-on-unavailable-fragments and, when a yt-dlp process exits, the orchestrator waitpid(2)s on it and starts a new worker. As it restarts workers automatically, the user is no longer required to babysit the download. Through careful selection of flags it can even start downloading a new video (with a different id) should the streamer stop the stream and restart it on their end.
This is obviously not a great solution. The underlying assumption is that if one process encounters a problem downloading a fragment, the other processes might be spared. It requires a lot of extra storage and a lot of extra network bandwidth. It might not even help much if the stream itself encounters a problem, though at least the download will be restarted automatically when that happens.
For more information about the orchestrator see its separate article: Livestream - README.