

Using OpenCV's training and detection tools for real: opencv_createsamples and opencv_traincascade (OpenCV 2.4.11)

2019-11-09 16:57:17
Source: reprint, contributed by a reader

Reprinted from http://blog.csdn.net/wuxiaoyao12/article/details/39227189

It has been a while since I last wrote a blog post. My student days are over, so I won't do a proper retrospective; today I'll just record the OpenCV AdaBoost training and detection workflow, for others' benefit as well as my own. Haha~

(Updated 2015-08-28; the updates appear as the parenthetical notes below, which were marked in green in the original post.)

I. Background

First, OpenCV currently supports training and detection with only three feature types: HAAR, LBP, and HOG; read up on whichever one you choose. OpenCV's training algorithm is based on AdaBoost, so brush up on the AdaBoost basics first. There is plenty of material online (my downloads include some too), but none of it is directly usable as-is, so below I will walk through the training procedure step by step and point out the details to watch for.

II. Preparing the positive samples

1. Collecting positive sample images

Because the positive samples are eventually normalized to a fixed size, I cropped each object out of its source image at collection time (which makes the later rescaling easy) rather than only recording the bounding-box count and positions (those are explained in the next step). While cropping, try to keep every sample's aspect ratio consistent. For example, since I normalize to 20x20 in the end, I cropped samples at 20x20, 21x21, 22x22, and so on, never exceeding 30x30 (that limit is specific to my use case; for something like face detection, where scale invariance must hold, larger crops are fine). My downloads also include a ready-to-use cropping program.

(I was wrong here: according to createsamples.cpp, we do not need to rescale in advance; the rescaling happens in step 3 when the .vec file is built. If you mark samples with objectMaker, the samplesInfo file it generates for each image can be fed straight to step 3. Rescaling ahead of time does no harm either; just follow step 2.)

2. Building the positive sample path list

In your image folder, write a small batch file (get route.bat; a .bat saves you from retyping everything in a DOS window, where you can't copy or paste!), along these lines:
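The screenshot of the batch file did not survive this copy of the post. A minimal sketch, assuming the positive images are .bmp files under the current folder and the list is written to pos.dat (the file names are my own, not the author's):

```bat
:: get route.bat - list every .bmp under this folder into pos.dat
:: /b prints bare names, /s recurses into subfolders (and prints full paths)
dir /b /s *.bmp > pos.dat
```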

Run the .bat and it generates a .dat file like this:

Delete every non-image path from this .dat file (e.g. the first two lines in the screenshot), then replace every "bmp" with "bmp 1 0 0 20 20", like so:

(The 1 is the object count, and the next four numbers are left top width height. If you did not pre-crop your samples, a line of this .dat might instead look like "1.bmp 3 1 3 24 24 26 28 25 25 60 80 26 26", where 1.bmp is the complete original image from which your samples were cut, containing three boxes.)

3. Creating the .vec file for training

Here we use an OpenCV program called opencv_createsamples.exe (you can copy it out of your OpenCV install). I put its command line in a .bat as well, since cascade training consumes a .vec file. Like so:
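The command screenshot is missing here; a sketch with hypothetical file names (pos.dat and pos.vec are my placeholders; -num must not exceed the samples described in the list, and -w/-h are the normalization size):

```bat
:: build pos.vec from the annotated list; -w/-h set the normalized sample size
opencv_createsamples.exe -info pos.dat -vec pos.vec -num 1000 -w 20 -h 20
```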

Run the .bat and the following .vec file appears in our pos folder:

That completes the positive sample preparation.

(The .vec file simply stores every sample image, already normalized to the same w and h. If you want to view all the samples, you can also do that by calling opencv_createsamples.exe; see the appendix for the usage.)

III. Preparing the negative samples

This part is really easy: use the original images directly, with no cropping (not cropping also preserves sample diversity) and no boxes to record (the usual advice online is just to make sure they are larger than the positive sample size, so do that). Only the paths need to be saved. The steps mirror the positive samples:
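As with the positives, the step screenshots are lost; a sketch assuming the negatives are .jpg files and the list goes into neg.dat (names are my own):

```bat
:: list all negative images; negatives need no coordinates, only paths
dir /b /s *.jpg > neg.dat
```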

That completes the negative sample preparation.

IV. Training

Here we use opencv_traincascade.exe (opencv_haartraining.exe is used much the same way; check the OpenCV source, or the many online write-ups, for its exact parameters. The main difference is that opencv_traincascade.exe supports more feature types than opencv_haartraining.exe and is more complete). Straight to it:

Again I put the command in a .bat file. Be sure to match the parameter names' case exactly, or they will not be recognized. Off we go, little white rabbit~~~
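The command screenshot is missing; a sketch with hypothetical paths (the -w/-h must equal the values used when building the .vec, and -numPos should be somewhat below the number of samples actually stored in it):

```bat
:: train a 20-stage HAAR cascade into the .\data folder (^ continues the line)
opencv_traincascade.exe -data data -vec pos.vec -bg neg.dat ^
  -numPos 900 -numNeg 2000 -numStages 20 -w 20 -h 20 -featureType HAAR
```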

These are the parameters the program actually parsed. Check them: if you mistyped a letter somewhere, you will see a value here that differs from what you intended, so read this printout carefully~~~~

It runs, and runs, and runs, like this:

Once a stage's strong classifier reaches your preset rates, training moves on to the next stage. Don't set the hit rate (HR) too high, or you will need a huge number of samples; and don't set stageNum too small, or detection will be very slow later.

When the .bat finishes, my XML files have been generated, like so:

In fact, training can be stopped midway: on the next run it reads these stage XML files and resumes the unfinished training. Haha~~~~ very considerate!

Training done. I've got my cascade.xml file, and now I'm going to use it for detection! Whee~~~~

V. Detection

OpenCV ships an opencv_performance.exe program for evaluating detection, but it only works with classifiers produced by opencv_haartraining.exe, so here I run detection over a series of images myself. The detection code is as follows:
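The author's code listing was mangled in this copy; here is a minimal stand-in (not the original code) using the OpenCV 2.4 C++ API. The cascade path, image path, and the 20x20 minimum size are placeholder assumptions:

```cpp
#include <opencv2/objdetect/objdetect.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <cstdio>
#include <vector>

int main()
{
    // load the trained cascade produced by opencv_traincascade
    cv::CascadeClassifier cascade;
    if (!cascade.load("cascade.xml")) { std::puts("cannot load cascade"); return 1; }

    cv::Mat img = cv::imread("test.jpg"), gray;
    if (img.empty()) { std::puts("cannot load image"); return 1; }
    cv::cvtColor(img, gray, CV_BGR2GRAY);
    cv::equalizeHist(gray, gray);

    std::vector<cv::Rect> objects;
    // scaleFactor = 1.1, minNeighbors = 3, minimum window = training size (20x20)
    cascade.detectMultiScale(gray, objects, 1.1, 3, 0, cv::Size(20, 20));

    for (size_t i = 0; i < objects.size(); ++i)
        cv::rectangle(img, objects[i], cv::Scalar(0, 0, 255), 2);
    cv::imwrite("result.jpg", img);
    return 0;
}
```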

[The detection code listing was lost from this copy of the post; only its first few #include lines survived.]

Appendix

1. opencv_createsamples.exe parameters (createsamples.cpp)

  [-info <collection_file_name>]
  [-img <image_file_name>]
  [-vec <vec_file_name>]
  [-bg <background_file_name>]
  [-num <number_of_samples = %d>]
  [-bgcolor <background_color = %d>]
  [-inv] [-randinv] [-bgthresh <background_color_threshold = %d>]
  [-maxidev <max_intensity_deviation = %d>]
  [-maxxangle <max_x_rotation_angle = %f>]
  [-maxyangle <max_y_rotation_angle = %f>]
  [-maxzangle <max_z_rotation_angle = %f>]
  [-show [<scale = %f>]]
  [-w <sample_width = %d>]   // default 24
  [-h <sample_height = %d>]  // default 24

The following cases 1) through 4) are checked in order, and exactly one of them applies:

1) When an image name (-img) and a vec name (-vec) are given, it calls:

/*
 * cvCreateTrainingSamples
 *
 * Create training samples applying random distortions to sample image and
 * store them in .vec file
 *
 * filename        - .vec file name
 * imgfilename     - sample image file name
 * bgcolor         - background color for sample image
 * bgthreshold     - background color threshold. Pixels whose colors are in range
 *   [bgcolor-bgthreshold, bgcolor+bgthreshold] are considered as transparent
 * bgfilename      - background description file
 */

2) When an info file (-info) and a vec name are given, it calls:

/*
 * cvCreateTrainingSamplesFromInfo
 *
 * Create training samples from a set of marked up images and store them into .vec file
 * infoname    - file in which marked up image descriptions are stored
 * num         - desired number of samples
 * showsamples - if not 0 samples will be shown
 * winwidth    - sample width
 * winheight   - sample height
 *
 * Return number of successfully created samples
 */
int cvCreateTrainingSamplesFromInfo( const char* infoname, const char* vecfilename,
                                     int num, int showsamples,
                                     int winwidth, int winheight )

This function reads every marked sample (x, y, w, h) in each image and rescales it to winwidth x winheight, which is why no manual rescaling is needed beforehand.

(As you can see, only the num, w, and h parameters are needed here.)

4) When only a vec name is given, all the (rescaled) samples inside the .vec can be displayed:

/*
 * cvShowVecSamples
 *
 * Shows samples stored in .vec file
 *
 * filename  - .vec file name
 * winwidth  - sample width
 * winheight - sample height
 * scale     - the scale each sample is adjusted to (this scale is only for
 *             display; it is unrelated to the rescaling done in step 3)
 */
void cvShowVecSamples( const char* filename, int winwidth, int winheight, double scale );

2. opencv_haartraining.exe parameters

(haartraining.cpp)

  -data <dir_name>
  -vec <vec_file_name>
  -bg <background_file_name>
  [-bg-vecfile]
  [-npos <number_of_positive_samples = %d>]
  [-nneg <number_of_negative_samples = %d>]
  [-nstages <number_of_stages = %d>]
  [-nsplits <number_of_splits = %d>]
  [-mem <memory_in_MB = %d>]
  [-sym (default)] [-nonsym]
  [-minhitrate <min_hit_rate = %f>]
  [-maxfalsealarm <max_false_alarm_rate = %f>]
  [-weighttrimming <weight_trimming = %f>]
  [-eqw]
  [-mode <BASIC (default) | CORE | ALL>]
  [-w <sample_width = %d>]
  [-h <sample_height = %d>]
  [-bt <DAB | RAB | LB | GAB (default)>]
  [-err <misclass (default) | gini | entropy>]
  [-maxtreesplits <max_number_of_splits_in_tree_cascade = %d>]
  [-minpos <min_number_of_positive_samples_per_cluster = %d>]

3. opencv_performance.exe parameters

(performance.cpp)

  -data <classifier_directory_name>
  -info <collection_file_name>
  [-maxSizeDiff <max_size_difference = %f>]
  [-maxPosDiff <max_position_difference = %f>]
  [-sf <scale_factor = %f>]
  [-ni <saveDetected = 0>]
  [-nos <number_of_stages = %d>]
  [-rs <roc_size = %d>]
  [-w <sample_width = %d>]
  [-h <sample_height = %d>]

4. opencv_traincascade.exe parameters

(traincascade.cpp)

cout << "Usage: " << argv[0] << endl;
cout << "  -data <cascade_dir_name>" << endl;
cout << "  -vec <vec_file_name>" << endl;
cout << "  -bg <background_file_name>" << endl;
cout << "  [-numPos <number_of_positive_samples = " << numPos << ">]" << endl;    // default 2000
cout << "  [-numNeg <number_of_negative_samples = " << numNeg << ">]" << endl;    // default 1000
cout << "  [-numStages <number_of_stages = " << numStages << ">]" << endl;        // default 20
cout << "  [-precalcValBufSize <precalculated_vals_buffer_size_in_Mb = " << precalcValBufSize << ">]" << endl;  // default 256
cout << "  [-precalcIdxBufSize <precalculated_idxs_buffer_size_in_Mb = " << precalcIdxBufSize << ">]" << endl;  // default 256
cout << "  [-baseFormatSave]" << endl;  // whether to save the xml in the old format; default false
// cout << "  [-numThreads <max_number_of_threads = " << numThreads << ">]" << endl;  // only in version 3.0; default numThreads = getNumThreads()
// cout << "  [-acceptanceRatioBreakValue <value> = " << acceptanceRatioBreakValue << ">]" << endl;  // only in version 3.0; default -1.0
cascadeParams.printDefaults();
stageParams.printDefaults();
for( int fi = 0; fi < fc; fi++ )
    featureParams[fi]->printDefaults();

Here cascadeParams.printDefaults() (cascadeclassifier.cpp) prints:

cout << "  [-stageType <";                                                  // default BOOST
for( int i = 0; i < (int)(sizeof(stageTypes)/sizeof(stageTypes[0])); i++ )
{
    cout << (i ? " | " : "") << stageTypes[i];
    if ( i == defaultStageType )
        cout << "(default)";
}
cout << ">]" << endl;

cout << "  [-featureType <{";                                               // default HAAR
for( int i = 0; i < (int)(sizeof(featureTypes)/sizeof(featureTypes[0])); i++ )
{
    cout << (i ? ", " : "") << featureTypes[i];
    if ( i == defaultStageType )
        cout << "(default)";
}
cout << "}>]" << endl;
cout << "  [-w <sampleWidth = " << winSize.width << ">]" << endl;           // default 24x24
cout << "  [-h <sampleHeight = " << winSize.height << ">]" << endl;

And stageParams.printDefaults() (boost.cpp) prints:

cout << "--boostParams--" << endl;
cout << "  [-bt <{" << CC_DISCRETE_BOOST << ", "
                    << CC_REAL_BOOST << ", "
                    << CC_LOGIT_BOOST << ", "
                    << CC_GENTLE_BOOST << "(default)}>]" << endl;                          // default CC_GENTLE_BOOST
cout << "  [-minHitRate <min_hit_rate> = " << minHitRate << ">]" << endl;                  // default 0.995
cout << "  [-maxFalseAlarmRate <max_false_alarm_rate = " << maxFalseAlarm << ">]" << endl; // default 0.5
cout << "  [-weightTrimRate <weight_trim_rate = " << weight_trim_rate << ">]" << endl;     // default 0.95
cout << "  [-maxDepth <max_depth_of_weak_tree = " << max_depth << ">]" << endl;            // default 1
cout << "  [-maxWeakCount <max_weak_tree_count = " << weak_count << ">]" << endl;          // default 100

And featureParams[fi]->printDefaults() (haarfeatures.cpp) prints:

cout << "  [-mode <" << CC_MODE_BASIC << "(default) | "    // default CC_MODE_BASIC
     << CC_MODE_CORE << " | " << CC_MODE_ALL << ">]" << endl;

Common parameters:

-data <cascade_dir_name>

Directory for the trained classifier; the training program creates it if it does not exist.

-vec <vec_file_name>

The .vec file containing the positive samples (produced by the opencv_createsamples program).

-bg <background_file_name>

The background description file, i.e. the file listing the negative sample images.

-numPos <number_of_positive_samples>

Number of positive samples used to train each stage.

-numNeg <number_of_negative_samples>

Number of negative samples used to train each stage; may exceed the number of images given by -bg.

-numStages <number_of_stages>

Number of cascade stages to train.

-precalcValBufSize <precalculated_vals_buffer_size_in_Mb>

Buffer size, in MB, for storing precomputed feature values.

-precalcIdxBufSize <precalculated_idxs_buffer_size_in_Mb>

Buffer size, in MB, for storing precomputed feature indices. The more memory, the shorter the training time.

-baseFormatSave

Only meaningful for Haar features: if specified, the cascade is saved in the old format.

Cascade parameters:

-stageType <BOOST(default)>

Stage type; currently only BOOST classifiers are supported as stages.

-featureType <{HAAR(default), LBP}>

Feature type: HAAR - Haar-like features; LBP - local binary pattern features.

-w <sampleWidth>

-h <sampleHeight>

Size of the training samples, in pixels. Must match the size used when the samples were created (with the opencv_createsamples program).

Boosted classifier parameters:

-bt <{DAB, RAB, LB, GAB(default)}>

Type of boosted classifier: DAB - Discrete AdaBoost, RAB - Real AdaBoost, LB - LogitBoost, GAB - Gentle AdaBoost.

-minHitRate <min_hit_rate>

Minimum desired hit rate for each stage (the fraction of positives classified as positive). The overall hit rate is roughly min_hit_rate^number_of_stages, so this can be set very high, e.g. 0.999.

-maxFalseAlarmRate <max_false_alarm_rate>

Maximum acceptable false alarm rate for each stage (the fraction of negatives classified as positive). The overall false alarm rate is roughly max_false_alarm_rate^number_of_stages, so the per-stage value can be fairly loose, e.g. 0.5.

-weightTrimRate <weight_trim_rate>

Specifies whether trimming should be used and its weight; 0.95 is a reasonable value.

-maxDepth <max_depth_of_weak_tree>

Maximum depth of each weak tree; 1 is a reasonable value, giving stumps.

-maxWeakCount <max_weak_tree_count>

Maximum number of weak classifiers per stage. The boosted classifier (stage) will have as many weak trees (<= maxWeakCount) as needed to achieve the given -maxFalseAlarmRate.

Haar feature parameters:

-mode <BASIC(default) | CORE | ALL>

Which set of Haar features to use during training: BASIC uses only upright features, while ALL uses both upright features and the 45-degree rotated features.

5. detectMultiScale parameters

This function detects objects at multiple scales in the input image:

image - the input grayscale image;

objects - the output vector of rectangles for the detected objects;

scaleFactor - how much the search window is scaled down between successive scans; default 1.1;

minNeighbors - how many neighboring detections each candidate rectangle must collect to be retained; default 3, meaning an object must be detected at least 3 times to count as an object;

flags - CV_HAAR_DO_CANNY_PRUNING: use a Canny edge detector to reject regions containing too few or too many edges;

        CV_HAAR_SCALE_IMAGE: scale the image rather than the detector;

        CV_HAAR_FIND_BIGGEST_OBJECT: detect only the largest object;

        CV_HAAR_DO_ROUGH_SEARCH: do only a rough search. The default is 0;

minSize and maxSize - bounds on the size of the detected objects (the search starts at maxSize and shrinks by the 1.1 factor until it drops below minSize, where detection stops).

6. OpenCV's documentation on Haar cascades

(haarfeatures.cpp, OpenCV 3.0)

Detailed Description: Haar Feature-based Cascade Classifier for Object Detection

The object detector described below has been initially proposed by Paul Viola and improved by Rainer Lienhart.

First, a classifier (namely a cascade of boosted classifiers working with haar-like features) is trained with a few hundred sample views of a particular object (i.e., a face or a car), called positive examples, that are scaled to the same size (say, 20x20), and negative examples - arbitrary images of the same size.

After a classifier is trained, it can be applied to a region of interest (of the same size as used during the training) in an input image. The classifier outputs a "1" if the region is likely to show the object (i.e., face/car), and "0" otherwise. To search for the object in the whole image one can move the search window across the image and check every location using the classifier. The classifier is designed so that it can be easily "resized" in order to be able to find the objects of interest at different sizes, which is more efficient than resizing the image itself. So, to find an object of an unknown size in the image the scan procedure should be done several times at different scales.

The word "cascade" in the classifier name means that the resultant classifier consists of several simpler classifiers (stages) that are applied subsequently to a region of interest until at some stage the candidate is rejected or all the stages are passed. The word "boosted" means that the classifiers at every stage of the cascade are complex themselves and they are built out of basic classifiers using one of four different boosting techniques (weighted voting). Currently Discrete Adaboost, Real Adaboost, Gentle Adaboost and Logitboost are supported. The basic classifiers are decision-tree classifiers with at least 2 leaves. Haar-like features are the input to the basic classifiers, and are calculated as described below. The current algorithm uses the following Haar-like features:

[Figure: the Haar-like feature prototypes (edge, line, and center-surround features, labeled 1a, 2b, 2c, etc.)]

The feature used in a particular classifier is specified by its shape (1a, 2b etc.), position within the region of interest and the scale (this scale is not the same as the scale used at the detection stage, though these two scales are multiplied). For example, in the case of the third line feature (2c) the response is calculated as the difference between the sum of image pixels under the rectangle covering the whole feature (including the two white stripes and the black stripe in the middle) and the sum of the image pixels under the black stripe multiplied by 3 in order to compensate for the differences in the size of areas. The sums of pixel values over rectangular regions are calculated rapidly using integral images (see below and the integral description).

To see the object detector at work, have a look at the facedetect demo: https://github.com/Itseez/opencv/tree/master/samples/cpp/dbt_face_detection.cpp

The following reference is for the detection part only. There is a separate application called opencv_traincascade that can train a cascade of boosted classifiers from a set of samples.

Note

In the new C++ interface it is also possible to use LBP (local binary pattern) features in addition to Haar-like features. [Viola01] Paul Viola and Michael J. Jones. Rapid Object Detection using a Boosted Cascade of Simple Features. IEEE CVPR, 2001. The paper is available online at https://www.cs.cmu.edu/~efros/courses/LBMV07/Papers/viola-cvpr-01.pdf (mentioned above).

7. OpenCV's documentation on boosting

(boost.cpp, OpenCV 3.0)

Boosting

A common machine learning task is supervised learning. In supervised learning, the goal is to learn the functional relationship F: y = F(x) between the input x and the output y. Predicting the qualitative output is called classification, while predicting the quantitative output is called regression.

Boosting is a powerful learning concept that provides a solution to the supervised classification learning task. It combines the performance of many "weak" classifiers to produce a powerful committee [125] . A weak classifier is only required to be better than chance, and thus can be very simple and computationally inexpensive. However, many of them smartly combine results to a strong classifier that often outperforms most "monolithic" strong classifiers such as SVMs and Neural Networks.

Decision trees are the most popular weak classifiers used in boosting schemes. Often the simplest decision trees with only a single split node per tree (called stumps ) are sufficient.

The boosted model is based on N training examples (x_i, y_i), i = 1..N, with x_i ∈ R^K and y_i ∈ {-1, +1}. x_i is a K-component vector. Each component encodes a feature relevant to the learning task at hand. The desired two-class output is encoded as -1 and +1.

Different variants of boosting are known as Discrete Adaboost, Real AdaBoost, LogitBoost, and Gentle AdaBoost [49] . All of them are very similar in their overall structure. Therefore, this chapter focuses only on the standard two-class Discrete AdaBoost algorithm, outlined below. Initially the same weight is assigned to each sample (step 2). Then, a weak classifier fm(x) is trained on the weighted training data (step 3a). Its weighted training error and scaling factor cm is computed (step 3b). The weights are increased for training samples that have been misclassified (step 3c). All weights are then normalized, and the process of finding the next weak classifier continues for another M -1 times. The final classifier F(x) is the sign of the weighted sum over the individual weak classifiers (step 4).

Two-class Discrete AdaBoost Algorithm

1. Given N examples (x_i, y_i), i = 1..N, with x_i ∈ R^K and y_i ∈ {-1, +1}.
2. Assign weights w_i = 1/N, i = 1,...,N.
3. Repeat for m = 1, 2,...,M:
   a. Fit the classifier f_m(x) ∈ {-1, 1} using weights w_i on the training data.
   b. Compute err_m = E_w[1(y ≠ f_m(x))] and c_m = log((1 - err_m)/err_m).
   c. Set w_i ← w_i exp[c_m 1(y_i ≠ f_m(x_i))], i = 1, 2,...,N, and renormalize so that Σ_i w_i = 1.
4. Classify new samples x using the formula: sign(Σ_{m=1..M} c_m f_m(x)).

Note: Similar to the classical boosting methods, the current implementation supports two-class classifiers only. For M > 2 classes there is the AdaBoost.MH algorithm (described in [49]) that reduces the problem to the two-class problem, yet with a much larger training set.

To reduce computation time for boosted models without substantially losing accuracy, the influence trimming technique can be employed. As the training algorithm proceeds and the number of trees in the ensemble is increased, a larger number of the training samples are classified correctly and with increasing confidence, thereby those samples receive smaller weights on the subsequent iterations. Examples with a very low relative weight have a small impact on the weak classifier training. Thus, such examples may be excluded during the weak classifier training without having much effect on the induced classifier. This process is controlled with the weight_trim_rate parameter. Only examples with the summary fraction weight_trim_rate of the total weight mass are used in the weak classifier training. Note that the weights for all training examples are recomputed at each training iteration. Examples deleted at a particular iteration may be used again for learning some of the weak classifiers further [49]

See also: cv::ml::Boost

Prediction with Boost

StatModel::predict(samples, results, flags) should be used. Pass flags=StatModel::RAW_OUTPUT to get the raw sum from Boost classifier.

8. Interpreting the messages printed during training

1) POS count : consumed   n1 : n2

Every stage calls updateTrainingSet( requiredLeafFARate, tempLeafFARate ):

bool CvCascadeClassifier::updateTrainingSet( double minimumAcceptanceRatio, double& acceptanceRatio)
{
    int64 posConsumed = 0, negConsumed = 0;
    imgReader.restart();
    int posCount = fillPassedSamples( 0, numPos, true, 0, posConsumed );
    if( !posCount )
        return false;
    // This is the "POS count : consumed" line. As I read it: the number of samples this
    // stage accepted as positive versus the number of positives consumed to get them.
    cout << "POS count : consumed   " << posCount << " : " << (int)posConsumed << endl;

    // apply only a fraction of negative samples; double is required since overflow is possible
    int proNumNeg = cvRound( ( ((double)numNeg) * ((double)posCount) ) / numPos );
    int negCount = fillPassedSamples( posCount, proNumNeg, false, minimumAcceptanceRatio, negConsumed );
    if ( !negCount )
        return false;

    curNumSamples = posCount + negCount;
    acceptanceRatio = negConsumed == 0 ? 0 : ( (double)negCount/(double)(int64)negConsumed );
    // "NEG count : acceptanceRatio": as I read it, the accepted negative count and the
    // fraction of consumed negatives that passed the current cascade
    cout << "NEG count : acceptanceRatio    " << negCount << " : " << acceptanceRatio << endl;
    return true;
}

int CvCascadeClassifier::fillPassedSamples( int first, int count, bool isPositive, double minimumAcceptanceRatio, int64& consumed )
{
    int getcount = 0;
    Mat img(cascadeParams.winSize, CV_8UC1);
    for( int i = first; i < first + count; i++ )
    {
        for( ; ; )
        {
            if( consumed != 0 && ((double)getcount+1)/(double)(int64)consumed <= minimumAcceptanceRatio )
                return getcount;

            bool isGetImg = isPositive ? imgReader.getPos( img ) :
                                         imgReader.getNeg( img );
            if( !isGetImg )
                return getcount;
            consumed++;

            featureEvaluator->setImage( img, isPositive ? 1 : 0, i );
            if( predict( i ) == 1.0F )
            {
                getcount++;
                printf("%s current samples: %d\r", isPositive ? "POS" : "NEG", getcount);
                break;
            }
        }
    }
    return getcount;
}

int CvCascadeClassifier::predict( int sampleIdx )
{
    CV_DbgAssert( sampleIdx < numPos + numNeg );
    for (vector< Ptr<CvCascadeBoost> >::iterator it = stageClassifiers.begin();
        it != stageClassifiers.end(); it++ )
    {
        if ( (*it)->predict( sampleIdx ) == 0.f )
            return 0;
    }
    return 1;
}

float CvCascadeBoost::predict( int sampleIdx, bool returnSum ) const
{
    CV_Assert( weak );
    double sum = 0;
    CvSeqReader reader;
    cvStartReadSeq( weak, &reader );
    cvSetSeqReaderPos( &reader, 0 );
    for( int i = 0; i < weak->total; i++ )
    {
        CvBoostTree* wtree;
        CV_READ_SEQ_ELEM( wtree, reader );
        sum += ((CvCascadeBoostTree*)wtree)->predict(sampleIdx)->value;
    }
    if( !returnSum )
        sum = sum < threshold - CV_THRESHOLD_EPS ? 0.0 : 1.0;
    return (float)sum;
}