Wondering if anyone has any insight into the H.264 byte stream below:
The ffmpeg command line is:
ffmpeg -s 320x240 -f avfoundation -r 30.00 -i "0:none" -c:v libx264 -preset ultrafast -tune zerolatency -x264opts crf=20:vbv-maxrate=3000:vbv-bufsize=100:intra-refresh=1:slice-max-size=1500:keyint=30:ref=1 -b:v 1000 -an -f mpegts -threads 8 -profile:v baseline -level 3.0 -pix_fmt yuv420p udp://127.0.0.1:5564
In theory, the H.264 elementary stream should look like this: (view image)
So the key is to extract individual NALUs from the H.264 stream, which means we should see a bitstream like this: (view image).
We get the real NALU type by masking the NAL header byte with 0x1F (0x1F & NALU byte). Under that mask, 0x27 is equal to 0x67: both give type 7.
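That masking step can be sketched like this (a minimal example; `nal_unit_type` is just my own helper name):

```python
# The NAL unit header is one byte:
#   forbidden_zero_bit (1 bit) | nal_ref_idc (2 bits) | nal_unit_type (5 bits)
# Masking with 0x1F keeps only the low 5 bits, i.e. nal_unit_type.
def nal_unit_type(header_byte: int) -> int:
    return header_byte & 0x1F

# 0x27 and 0x67 differ only in nal_ref_idc, so both decode to type 7 (SPS).
print(nal_unit_type(0x27))  # 7
print(nal_unit_type(0x67))  # 7
```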
Normally, we should only see these NALU types (after applying 0x1F & NALU byte):
1: slice of a non-IDR picture. (P frame)
5: slice of an IDR picture. (I frame)
6: Supplemental enhancement information. (SEI)
7: Sequence parameter set. (SPS parameter)
8: Picture parameter set. (PPS parameter)
9: Access unit delimiter.
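That table can be turned into a small lookup (a sketch; `NAL_TYPE_NAMES` and `describe_nal` are my own names, not from any library):

```python
# Common nal_unit_type values from the list above.
NAL_TYPE_NAMES = {
    1: "slice of a non-IDR picture (P frame)",
    5: "slice of an IDR picture (I frame)",
    6: "supplemental enhancement information (SEI)",
    7: "sequence parameter set (SPS)",
    8: "picture parameter set (PPS)",
    9: "access unit delimiter (AUD)",
}

def describe_nal(header_byte: int) -> str:
    t = header_byte & 0x1F  # keep only nal_unit_type
    return NAL_TYPE_NAMES.get(t, f"unknown/undefined type {t}")

print(describe_nal(0x67))  # sequence parameter set (SPS)
print(describe_nal(0x41))  # slice of a non-IDR picture (P frame)
```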
But what I get from UDP, in the first UDP packet, looks like this:
(source: artsmesh.io)
In this UDP datagram, something doesn't make sense: after the 0x00000001 start code, the NALU header byte is 0xff, and the second one is 0xf0. Both are invalid in H.264 (the forbidden_zero_bit is set in each, and the masked types 31 and 16 are unspecified/reserved).
So I'm having trouble figuring out why the H.264 stream is not working.
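One sanity check I can think of (a sketch; `looks_like_mpegts` is a hypothetical helper, not a library call): since the command uses `-f mpegts`, the UDP payload is MPEG-TS packets, not a raw Annex B elementary stream, so bytes that follow an apparent start code may actually be transport-stream header bytes rather than a NAL header. MPEG-TS packets are 188 bytes each and always begin with the sync byte 0x47:

```python
def looks_like_mpegts(payload: bytes) -> bool:
    # MPEG-TS: fixed 188-byte packets, each starting with sync byte 0x47.
    if len(payload) < 188 or len(payload) % 188 != 0:
        return False
    return all(payload[i] == 0x47 for i in range(0, len(payload), 188))

# A raw Annex B stream would instead begin with 0x000001 / 0x00000001.
print(looks_like_mpegts(bytes([0x47] + [0x00] * 187) * 2))  # True
print(looks_like_mpegts(b"\x00\x00\x00\x01\x67"))           # False
```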
And is it true that the start code header is always either four bytes (0x00000001) or three bytes (0x000001) within the same UDP packet (or the same streaming session)?