C++ file input/output, all you need in one article — Jasmine-Lily's blog, CSDN
C++ STL: std::list introduction and usage — 超级大洋葱806's blog, CSDN
1. Get the region information of the current CU
(1) The class here is UnitArea, which stores the position information of all components of the region
const UnitArea &area = cu;
Since cu carries a lot of general data, simply declare a variable of the corresponding class and bind it to the CU; the data is then accessible through it.
(2) The class here is CompArea, which stores the position information of one selected component of the region
const CompArea &area = cu.blocks[COMPONENT_Y];
Depending on which class area belongs to, the getPredBuf() obtained later differs as well.
2. Store the prediction information
- //test: dump the prediction buffer
- PelBuf piPredtest = cs.getPredBuf (area);
- AreaBuf<Pel> &recYpoint = piPredtest;
- std::ofstream out("C:\\Users\\qjjt\\Desktop\\output.txt");
- for(int i = 0;i < widthtest;i++)
- {
-   for(int j = 0;j < heighttest;j++)
-   {
-     out << "pred " << "x=" << i << " y=" << j << " " << recYpoint.at(i,j) << std::endl;
-   }
- }
There is a problem here: this only yields the per-pixel prediction values of the CU currently being tested, and that CU is not the CU of the final partitioning decision, so a lot of invalid prediction information is produced.
bestCS, by contrast, holds the best partitioning and is directly usable, so I placed the dump there instead:
- //coordinates
- int posx = area.lumaPos().x;
- int posy = area.lumaPos().y;
- //CTU width and height
- const uint32_t widthtest = partitioner.currArea().lwidth();
- const uint32_t heighttest = partitioner.currArea().lheight();
- //region information of the luma component
- const CompArea &_area = area.blocks[COMPONENT_Y];
How to read and write txt files in C++ — Williamhzw's blog, CSDN
- PelBuf piPredtest = bestCS->getPredBuf (_area);
- AreaBuf<Pel> &recYpoint = piPredtest;
- std::ofstream out("C:\\Users\\qjjt\\Desktop\\output.txt",std::ios::app);
- out << "current CTU position" << " X=" << posx << " Y=" << posy << " W=" << widthtest << " H=" << heighttest << std::endl;
- for(int i = 0;i < widthtest;i++)
- {
-   for(int j = 0;j < heighttest;j++)
-   {
-     out << "pred " << "x=" << i << " y=" << j << " " << recYpoint.at(i,j) << std::endl;
-   }
- }
Approach:
(1) First create piPredtest to receive the prediction buffer of the best partitioning in bestCS. This variable is an object of the PelBuf struct; whether to use PelBuf or PelUnitBuf depends on whether the current area is the whole region or a single component of it. See the figure below — these functions are dedicated to accessing the buffers.

(2) Then create recYpoint to receive the contents of piPredtest. This step is probably unnecessary, since PelBuf is just typedef AreaBuf<Pel> PelBuf.
(3) Next come the file-output operations; see the blog linked above. std::ios::app makes each write append to the file instead of overwriting it; otherwise every new run would leave only the data of the current CTU.
(4) Finally, every pixel in the CTU yields its prediction value through AreaBuf<Pel>'s at(x, y).
【1】 At first the prediction values always came out as 0, so I inspected bestCS directly (in the debugger's Autos window while running).
In bestCS->m_pred the buf is -12581 only because I had already modified it; originally it was 0. So the buffer was never propagated into this best bestCS. How, then, do we obtain the prediction buffer?

(1) First I went through all references to getPredBuf and found this at line 1001 of the xCompressCU function:
bestCS->picture->getPredBuf(currCsArea).copyFrom(bestCS->getPredBuf(currCsArea));
Then I ran a test:
//test
PelUnitBuf piPredtest = bestCS->getPredBuf (currCsArea);
PelUnitBuf piPredtest2 = bestCS->picture->getPredBuf(currCsArea);
Both bufs contain values, which shows the prediction buffer is indeed transferred while the best partitioning is being determined; it just never reaches the bestCS of the final best partitioning.
(2) Then I kept examining all references to getPredBuf and its call hierarchy. Based on the partition-result display I had written earlier, the reconstruction buffer can be accessed, which led me to the useSubStructure() function; checking all of its callers shows it is invoked at line 1235 of xCheckModeSplit(), where cpyReco defaults to true while cpyPred is controlled by the KEEP_PRED_AND_RESI_SIGNALS macro. According to my earlier blog on useSubStructure(), this function is mainly responsible for transferring the buffers, so I changed KEEP_PRED_AND_RESI_SIGNALS from its default 0 to 1, which finally made the prediction values show up.
Displaying VVC partition results on the picture (partly unfinished) — 青椒鸡汤's blog, CSDN
The exact role of useSubStructure() should be revisited, and the two RD-selection functions below still need to be finished — put off for now, but write them soon. Also figure out where bestCS->picture->getPredBuf(currCsArea) actually ends up.
RD cost in the xCheckModeSplit() function (unwritten; to write 6.11) — 青椒鸡汤's blog, CSDN
Analysis of xCheckBestMode() and useModeResult() (unfinished; to write 6.11) — 青椒鸡汤's blog, CSDN
Since the macro keeping the prediction and residual buffers is now set to 1, it should also be possible to read them directly through picture in the compressGOP function.
Reference style — the codebase already contains calls of this form:
const CPelBuf picOrig = pcPic->getOrigBuf (pcPic->block (compID));
Note that you cannot write pcPic->getPredBuf() with no arguments; the region information must be passed in.
Placed after the compressSlice() call:
- const CompArea &area = pcPic->Y();//luma component region
- m_pcSliceEncoder->precompressSlice( pcPic );
- m_pcSliceEncoder->compressSlice ( pcPic, false, false );
-
- //test
-
- PelBuf piPredtest = pcPic->getPredBuf(area);
- AreaBuf<Pel> &predYpoint = piPredtest;
- std::ofstream out1("C:\\Users\\qjjt\\Desktop\\pred_out.txt");
- out1 << "current frame" << " W=" << picWidth << " H=" << picHeight << std::endl;//give each output file its own stream name
- for(int i = 0;i < picWidth;i++)
- {
-   for(int j = 0;j < picHeight;j++)
-   {
-     out1 << "pred " << "x=" << i << " y=" << j << " " << predYpoint.at(i,j) << std::endl;
-   }
- }
-
- PelBuf piResitest = pcPic->getResiBuf(area);
- AreaBuf<Pel> &resiYpoint = piResitest;
- std::ofstream out2("C:\\Users\\qjjt\\Desktop\\resi_out.txt");
- out2 << "current frame" << " W=" << picWidth << " H=" << picHeight << std::endl;
- for(int i = 0;i < picWidth;i++)
- {
-   for(int j = 0;j < picHeight;j++)
-   {
-     out2 << "resi " << "x=" << i << " y=" << j << " " << resiYpoint.at(i,j) << std::endl;
-   }
- }
-
- PelBuf piOrigtest = pcPic->getOrigBuf(area);
- AreaBuf<Pel> &OrigYpoint = piOrigtest;
- std::ofstream out3("C:\\Users\\qjjt\\Desktop\\orig_out.txt");
- out3 << "current frame" << " W=" << picWidth << " H=" << picHeight << std::endl;
- for(int i = 0;i < picWidth;i++)
- {
-   for(int j = 0;j < picHeight;j++)
-   {
-     out3 << "orig " << "x=" << i << " y=" << j << " " << OrigYpoint.at(i,j) << std::endl;
-   }
- }
Note: calling getPredBuf() here requires passing the region information area every time, which is tedious. You can add a convenience overload in Picture.cpp

Remember to add the corresponding declaration in Picture.h

After that, getPredBuf() can be called with no arguments.
Classes, objects and pointers to objects in HEVC — xidianliye's blog, CSDN
Since encoding outputs the bitstream str.bin and the rec.yuv file, searching for "rec" led to the xCreateLib() function:
- void EncApp::xCreateLib( std::list<PelUnitBuf*>& recBufList, const int layerId )
- {
- // Video I/O
- m_cVideoIOYuvInputFile.open( m_inputFileName, false, m_inputBitDepth, m_MSBExtendedBitDepth, m_internalBitDepth ); // read mode
- #if EXTENSION_360_VIDEO
- m_cVideoIOYuvInputFile.skipFrames(m_FrameSkip, m_inputFileWidth, m_inputFileHeight, m_InputChromaFormatIDC);
- #else
- const int sourceHeight = m_isField ? m_iSourceHeightOrg : m_sourceHeight;
- m_cVideoIOYuvInputFile.skipFrames(m_FrameSkip, m_sourceWidth - m_sourcePadding[0], sourceHeight - m_sourcePadding[1], m_InputChromaFormatIDC);
- #endif
- if (!m_reconFileName.empty())
- {
- if (m_packedYUVMode && ((m_outputBitDepth[CH_L] != 10 && m_outputBitDepth[CH_L] != 12)
- || ((m_sourceWidth & (1 + (m_outputBitDepth[CH_L] & 3))) != 0)))
- {
- EXIT ("Invalid output bit-depth or image width for packed YUV output, aborting\n");
- }
- if (m_packedYUVMode && (m_chromaFormatIDC != CHROMA_400) && ((m_outputBitDepth[CH_C] != 10 && m_outputBitDepth[CH_C] != 12)
- || (((m_sourceWidth / SPS::getWinUnitX (m_chromaFormatIDC)) & (1 + (m_outputBitDepth[CH_C] & 3))) != 0)))
- {
- EXIT ("Invalid chroma output bit-depth or image width for packed YUV output, aborting\n");
- }
-
- std::string reconFileName = m_reconFileName;
- if( m_reconFileName.compare( "/dev/null" ) && (m_maxLayers > 1) )
- {
- size_t pos = reconFileName.find_last_of('.');
- if (pos != string::npos)
- {
- reconFileName.insert( pos, std::to_string( layerId ) );
- }
- else
- {
- reconFileName.append( std::to_string( layerId ) );
- }
- }
- m_cVideoIOYuvReconFile.open( reconFileName, true, m_outputBitDepth, m_outputBitDepth, m_internalBitDepth ); // write mode
- }
-
- // create the encoder
- m_cEncLib.create( layerId );
-
- // create the output buffer
- for( int i = 0; i < (m_iGOPSize + 1 + (m_isField ? 1 : 0)); i++ )
- {
- recBufList.push_back( new PelUnitBuf );
- }
- }
The m_reconFileName in this function is the default name of the reconstructed YUV file, which led me to recBufList — it stores the reconstruction data. Checking its references
leads to the xWriteOutput function called from encode(); this is the function that finally writes rec.yuv:
- void EncApp::xWriteOutput( int iNumEncoded, std::list<PelUnitBuf*>& recBufList )
- {
- const InputColourSpaceConversion ipCSC = (!m_outputInternalColourSpace) ? m_inputColourSpaceConvert : IPCOLOURSPACE_UNCHANGED;
- std::list<PelUnitBuf*>::iterator iterPicYuvRec = recBufList.end();
- int i;
-
- for ( i = 0; i < iNumEncoded; i++ )
- {
- --iterPicYuvRec;
- }
-
- if (m_isField)
- {
- //Reinterlace fields
- for ( i = 0; i < iNumEncoded/2; i++ )
- {
- const PelUnitBuf* pcPicYuvRecTop = *(iterPicYuvRec++);
- const PelUnitBuf* pcPicYuvRecBottom = *(iterPicYuvRec++);
-
- if (!m_reconFileName.empty())
- {
- m_cVideoIOYuvReconFile.write( *pcPicYuvRecTop, *pcPicYuvRecBottom,
- ipCSC,
- false, // TODO: m_packedYUVMode,
- m_confWinLeft, m_confWinRight, m_confWinTop, m_confWinBottom, NUM_CHROMA_FORMAT, m_isTopFieldFirst );
- }
- }
- }
- else
- {
- for ( i = 0; i < iNumEncoded; i++ )
- {
- const PelUnitBuf* pcPicYuvRec = *(iterPicYuvRec++);
- if (!m_reconFileName.empty())
- {
- if( m_cEncLib.isResChangeInClvsEnabled() && m_cEncLib.getUpscaledOutput() )
- {
- const SPS& sps = *m_cEncLib.getSPS( 0 );
- const PPS& pps = *m_cEncLib.getPPS( ( sps.getMaxPicWidthInLumaSamples() != pcPicYuvRec->get( COMPONENT_Y ).width || sps.getMaxPicHeightInLumaSamples() != pcPicYuvRec->get( COMPONENT_Y ).height ) ? ENC_PPS_ID_RPR : 0 );
-
- m_cVideoIOYuvReconFile.writeUpscaledPicture( sps, pps, *pcPicYuvRec, ipCSC, m_packedYUVMode, m_cEncLib.getUpscaledOutput(), NUM_CHROMA_FORMAT, m_bClipOutputVideoToRec709Range );
- }
- else
- {
- m_cVideoIOYuvReconFile.write( pcPicYuvRec->get( COMPONENT_Y ).width, pcPicYuvRec->get( COMPONENT_Y ).height, *pcPicYuvRec, ipCSC, m_packedYUVMode,
- m_confWinLeft, m_confWinRight, m_confWinTop, m_confWinBottom, NUM_CHROMA_FORMAT, m_bClipOutputVideoToRec709Range );
- }
- }
- }
- }
- }
Next, look at where recBufList is modified; this leads to the xGetBuffer() function in compressGOP:
- void EncGOP::xGetBuffer( PicList& rcListPic,
- std::list<PelUnitBuf*>& rcListPicYuvRecOut,
- int iNumPicRcvd,
- int iTimeOffset,
- Picture*& rpcPic,
- int pocCurr,
- bool isField )
- {
- int i;
- //const CompArea &areas = rpcPic->block(COMPONENT_Y);
- // Rec. output
- std::list<PelUnitBuf*>::iterator iterPicYuvRec = rcListPicYuvRecOut.end();
-
- if (isField && pocCurr > 1 && m_iGopSize!=1)
- {
- iTimeOffset--;
- }
-
- int multipleFactor = m_pcCfg->getUseCompositeRef() ? 2 : 1;
- for (i = 0; i < (iNumPicRcvd * multipleFactor - iTimeOffset + 1); i += multipleFactor)
- {
- iterPicYuvRec--;
- }
-
- // Current pic.
- PicList::iterator iterPic = rcListPic.begin();
- while (iterPic != rcListPic.end())
- {
- rpcPic = *(iterPic);
- if( rpcPic->getPOC() == pocCurr && rpcPic->layerId == m_pcEncLib->getLayerId() )
- {
- break;
- }
- iterPic++;
- }
-
- CHECK(!(rpcPic != NULL), "Unspecified error");
- CHECK(!(rpcPic->getPOC() == pocCurr), "Unspecified error");
-
- (**iterPicYuvRec) = rpcPic->getRecoBuf();
- return;
- }
The last statement is (**iterPicYuvRec) = rpcPic->getRecoBuf();. But there is a catch: naively changing getRecoBuf to getPredBuf() compiles fine yet fails at encode time. Comparing all references of getRecoBuf and getPredBuf shows that reconstruction carries many extra operations, so you would probably have to understand and adapt all of them before a prediction .yuv could be written out.
If you want the PSNR and BD-rate of the prediction signal, look at the xCalculateAddPSNR function inside xCalculateAddPSNRs, which prints these metrics; changing getRecoBuf to getPredBuf() there is enough.
- void EncGOP::xCalculateAddPSNRs( const bool isField, const bool isFieldTopFieldFirst,
- const int iGOPid, Picture* pcPic, const AccessUnit&accessUnit, PicList &rcListPic,
- const int64_t dEncTime, const InputColourSpaceConversion snr_conversion,
- const bool printFrameMSE, const bool printMSSSIM, double* PSNR_Y, bool isEncodeLtRef)
- {
- xCalculateAddPSNR(pcPic, pcPic->getRecoBuf(), accessUnit, (double)dEncTime, snr_conversion,
- printFrameMSE, printMSSSIM, PSNR_Y, isEncodeLtRef);
-
- //In case of field coding, compute the interlaced PSNR for both fields
- if(isField)
- {
- bool bothFieldsAreEncoded = false;
- int correspondingFieldPOC = pcPic->getPOC();
- int currentPicGOPPoc = m_pcCfg->getGOPEntry(iGOPid).m_POC;
- if(pcPic->getPOC() == 0)
- {
- // particular case for POC 0 and 1.
- // If they are not encoded first and separately from other pictures, we need to change this
- // POC 0 is always encoded first then POC 1 is encoded
- bothFieldsAreEncoded = false;
- }
5. Computing the BD-rate
How BD-rate is computed — 红玉圆圆圆's blog, CSDN (bdrate)
Download proposal JCTVC-K0279-v2 and use the Excel sheet it contains to compute the BD-rate.

