I already wrote a small program that renders a live 3D cube with OpenGL in C++ and pipes the RGB frame data into ffmpeg; ffmpeg writes and continuously updates the .m3u8 playlist and .ts segment files for HLS (onto a RAM-drive mount), and an rsync command running under watch constantly uploads them to a public_html directory so the local cube rendering can be accessed as a live video stream.
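For context, a minimal sketch of that pipe-into-ffmpeg step is below. The resolution, paths, and ffmpeg flags are my own assumptions for illustration; the real program reads pixels back from the OpenGL framebuffer (e.g. with glReadPixels) instead of generating a test pattern.

    // Sketch: push raw RGB frames into an ffmpeg child process that emits HLS.
    // Assumptions: 640x480 @ 30 fps, ffmpeg on PATH, /mnt/ramdisk mounted.
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    int main() {
        const int W = 640, H = 480, FPS = 30;

        // ffmpeg reads rgb24 frames from stdin and keeps a rolling HLS playlist.
        const char* cmd =
            "ffmpeg -f rawvideo -pixel_format rgb24 -video_size 640x480 "
            "-framerate 30 -i - -c:v libx264 -preset veryfast -g 60 "
            "-f hls -hls_time 2 -hls_list_size 5 -hls_flags delete_segments "
            "/mnt/ramdisk/stream.m3u8";
        FILE* ff = popen(cmd, "w");
        if (!ff) return 1;

        std::vector<uint8_t> frame(W * H * 3);
        for (int n = 0; n < FPS * 60; ++n) {           // stream one minute
            // Real code: glReadPixels(0, 0, W, H, GL_RGB, GL_UNSIGNED_BYTE, frame.data());
            for (size_t i = 0; i < frame.size(); ++i)  // placeholder pattern
                frame[i] = uint8_t((i + 3 * n) & 0xFF);
            fwrite(frame.data(), 1, frame.size(), ff);
        }
        pclose(ff);
        return 0;
    }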

This got me the interview; however, I have never done anything with Docker aside from running their 'hello world' image. I got Docker running on the machine after some kernel recompiling. Does anyone have suggestions on what I should practice or work through in a Docker tutorial before the interview?
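One concrete thing to practice, sketched below, would be containerizing the encoder side of the pipeline; the base image, file names, and mount path here are assumptions for illustration, not anything from the job posting.

    # Hypothetical Dockerfile: build the renderer/encoder with ffmpeg available at runtime.
    FROM ubuntu:24.04
    RUN apt-get update && apt-get install -y --no-install-recommends g++ ffmpeg \
        && rm -rf /var/lib/apt/lists/*
    COPY cube_stream.cpp /src/cube_stream.cpp
    RUN g++ -O2 -o /usr/local/bin/cube_stream /src/cube_stream.cpp
    CMD ["/usr/local/bin/cube_stream"]

Built and run with, e.g., docker build -t cube-stream . and docker run --rm -v /mnt/ramdisk:/mnt/ramdisk cube-stream, so the HLS segments land on the host for rsync to pick up. A real OpenGL renderer would additionally need GPU access (e.g. --gpus all with the NVIDIA container toolkit) or software rendering such as Mesa inside the container, which is itself useful Docker practice material.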

Thanks!!

Edit: 10,000%'d it. Might be writing y'all's weather channel streaming code soon.


[–] 3 pts

Is this Docker-specific, or can you toss in K3s/K8s to mix it up? Perhaps also mention container hardening, API security, and cloud spin-ups. Some IaC may also go a long way, since IaC and virtualization (like edge computing/CDN) are the next-level goal for many orgs moving from a capex to an opex model, depending on their management model.

[–] 2 pts

My guess is it would use k8s, because to me it doesn't make much sense to run just standalone Docker containers, so I assume they want some cloud-scalable purpose for doing HLS. I was trying to psych out whether that is to scale up the HTTP distribution, to parallelize the encoding, or both.

[–] 2 pts

Then I would mention that your work could be transferred into a K8s cluster/model and managed that way, as well as with plain Docker. Can you run this in an AWS VPC? I would mention it if you can, since the environment is already there. And since it's a streaming service, I would also include the ability to pipe to edge devices (CDN) for best-effort distribution.