1. Introduction to Thread Pools
The usual way to use multiple threads is to create a new thread when it is needed, run a particular task in that thread, and let the thread exit once the task is done. For ordinary applications this is good enough; we rarely need to create a large number of threads that each run a trivial task and are then destroyed.
In applications such as web, email, or database servers, however, or in telecom services such as ring-back tones, the application must be ready at any moment for a huge number of connection requests, while the work behind each request may be very simple and take very little processing time. Such an application can end up creating and destroying threads continuously. Although creating a thread is far cheaper than creating a process, when threads are created frequently and each runs for only a very short time, the extra load that creation and destruction place on the processor becomes considerable.
A thread pool is designed precisely to cut the overhead of this frequent creation and destruction. A thread pool normally relies on pre-creation: a certain number of threads are created when the application starts. While the application runs, it can request an idle thread from the pool to execute a task; when the task completes, the thread is not destroyed but handed back to the pool, which manages it. If every pre-created thread is already busy when a new task arrives, the pool dynamically creates extra threads to absorb the request. Conversely, during quiet periods most pooled threads sit idle, so to save system resources the pool dynamically destroys some of the idle threads. A thread pool therefore needs a manager that maintains the number of threads according to some policy.
With this scheme, the cost of creating and destroying threads is spread across all the tasks the pool executes: the more tasks are run, the smaller the share of that cost carried by each task. For example, if a pooled thread serves a thousand short requests during its lifetime, the one-time cost of creating it is divided over those thousand requests instead of being paid once per request.
Of course, if the cost of creating and destroying a thread is negligible compared with the cost of the task it executes, there is no need for a thread pool at all; FTP or Telnet sessions, for example, usually fall into this category.
2. Thread Pool Design
Below, a simple thread pool is implemented in C. To make the library more convenient to use, the implementation borrows a few object-oriented ideas; unlike Objective-C, it merely uses a struct to emulate a C++-style class, a technique that can be seen all over the Linux kernel.
The user-facing interfaces of the library are:
typedef struct tp_work_desc_s tp_work_desc;     //extra information a thread needs to execute its task
typedef struct tp_work_s tp_work;               //the task a thread executes
typedef struct tp_thread_info_s tp_thread_info; //per-thread info: thread id, idle/busy status, assigned task, etc.
typedef struct tp_thread_pool_s tp_thread_pool; //the thread pool object and its operations
//thread param
struct tp_work_desc_s{
    ……
};

//base thread struct
struct tp_work_s{
    //main process function. user interface
    void (*process_job)(tp_work *this, tp_work_desc *job);
};

tp_thread_pool *creat_thread_pool(int min_num, int max_num);
tp_work_desc_s holds the information a thread needs while executing a task; it is passed to each thread as its task parameter, and since it differs from application to application, its contents are defined by the user. tp_work_s is the task we want a thread to execute. When we ask the pool for a thread, we first have to specify these two structures, i.e. what task the thread should perform and what extra information it needs to perform it. The interface function creat_thread_pool creates a thread pool instance; the caller specifies the minimum number of threads min_num and the maximum number max_num the instance may hold. The minimum is the number of threads pre-created when the pool is built, and its value directly affects how much benefit the pool brings: if it is too small, the pre-created threads are quickly used up and new threads constantly have to be created to keep up with requests; if it is too large, many threads may sit idle. It has to be chosen according to the actual needs of the application. The structure describing the thread pool is as follows:
//main thread pool struct
struct tp_thread_pool_s{
    TPBOOL (*init)(tp_thread_pool *this);
    void (*close)(tp_thread_pool *this);
    void (*process_job)(tp_thread_pool *this, tp_work *worker, tp_work_desc *job);
    int (*get_thread_by_id)(tp_thread_pool *this, pthread_t id);
    TPBOOL (*add_thread)(tp_thread_pool *this);
    TPBOOL (*delete_thread)(tp_thread_pool *this);
    int (*get_tp_status)(tp_thread_pool *this);

    int min_th_num;                //min thread number in the pool
    int cur_th_num;                //current thread number in the pool
    int max_th_num;                //max thread number in the pool
    pthread_mutex_t tp_lock;       //protects pool-wide state such as cur_th_num
    pthread_t manage_thread_id;    //manage thread id
    tp_thread_info *thread_info;   //array of per-work-thread info
};
The structure tp_thread_info_s records each thread's id, whether it is idle or busy, the task it is executing, and so on; the user does not need to care about it.
//thread info
struct tp_thread_info_s{
    pthread_t thread_id;           //thread id
    TPBOOL is_busy;                //thread status: true = busy, false = idle
    pthread_cond_t thread_cond;    //signalled when a new job is assigned to this thread
    pthread_mutex_t thread_lock;   //protects the fields of this entry
    tp_work *th_work;              //the task this thread is currently executing
    tp_work_desc *th_job;          //the parameters of that task
};
The tp_thread_pool_s structure contains the interfaces and variables for operating the thread pool. After creat_thread_pool returns a pool instance, it must first be initialised explicitly through the init interface. During initialisation the pool pre-creates the specified minimum number of threads; they all block waiting for work, so they consume no CPU, although they do occupy some memory. init also starts a manage thread that runs for the whole lifetime of the pool: it periodically inspects the pool's state, and if too many threads are idle it deletes some of them, while never letting the total drop below the specified minimum. Once the pool has been created and initialised, we simply fill in a tp_work_desc_s and a tp_work_s and pass them to the pool's process_job interface. That is everything we need to know in order to use this thread pool. When the pool is no longer needed, the close interface destroys it.
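To make this workflow concrete, here is a minimal usage sketch. It relies on the sample tp_work_desc_s fields (inum, onum, chnum) defined in the header in the next section; the task function print_job, the sample numbers, and the pool sizes are made up purely for illustration, and error handling is omitted.
#include <unistd.h>
#include "thread-pool.h"

//hypothetical task: print the call information carried in tp_work_desc
void print_job(tp_work *this, tp_work_desc *job){
    printf("handling call from %s to %s on channel %d\n", job->inum, job->onum, job->chnum);
}

int main(void){
    tp_work work = { print_job };
    tp_work_desc job = { "1001", "2002", 3 };          //sample data only

    tp_thread_pool *pool = creat_thread_pool(5, 20);   //pre-create 5 threads, grow to at most 20
    pool->init(pool);

    //hand the task to an idle (or newly created) pool thread
    pool->process_job(pool, &work, &job);

    sleep(1);            //crude wait so the worker can finish; illustration only
    pool->close(pool);
    free(pool);
    return 0;
}
A test driver along these lines could be compiled together with the implementation file with something like gcc -pthread thread-pool.c main.c, assuming those file names.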
3. Implementation
Thread-pool.h (header file):
#include <stdio.h>
#include <stdlib.h>
#include <string.h>    //memset
#include <unistd.h>    //sleep
#include <sys/types.h>
#include <pthread.h>
#include <signal.h>
#ifndef TPBOOL
typedef int TPBOOL;
#endif
#ifndef TRUE
#define TRUE 1
#endif
#ifndef FALSE
#define FALSE 0
#endif
#define BUSY_THRESHOLD 0.5 //if (busy threads)/(all threads) is below this, the pool counts as idle
#define MANAGE_INTERVAL 5 //manage thread check interval, in seconds
typedef struct tp_work_desc_s tp_work_desc;
typedef struct tp_work_s tp_work;
typedef struct tp_thread_info_s tp_thread_info;
typedef struct tp_thread_pool_s tp_thread_pool;
//thread param
struct tp_work_desc_s{
    char *inum;    //call-in number
    char *onum;    //call-out number
    int chnum;     //channel number
};

//base thread struct
struct tp_work_s{
    //main process function. user interface
    void (*process_job)(tp_work *this, tp_work_desc *job);
};

//thread info
struct tp_thread_info_s{
    pthread_t thread_id;           //thread id
    TPBOOL is_busy;                //thread status: true = busy, false = idle
    pthread_cond_t thread_cond;    //signalled when a new job is assigned to this thread
    pthread_mutex_t thread_lock;   //protects the fields of this entry
    tp_work *th_work;              //the task this thread is currently executing
    tp_work_desc *th_job;          //the parameters of that task
};
//main thread pool struct
struct tp_thread_pool_s{
    TPBOOL (*init)(tp_thread_pool *this);
    void (*close)(tp_thread_pool *this);
    void (*process_job)(tp_thread_pool *this, tp_work *worker, tp_work_desc *job);
    int (*get_thread_by_id)(tp_thread_pool *this, pthread_t id);
    TPBOOL (*add_thread)(tp_thread_pool *this);
    TPBOOL (*delete_thread)(tp_thread_pool *this);
    int (*get_tp_status)(tp_thread_pool *this);

    int min_th_num;                //min thread number in the pool
    int cur_th_num;                //current thread number in the pool
    int max_th_num;                //max thread number in the pool
    pthread_mutex_t tp_lock;       //protects pool-wide state such as cur_th_num
    pthread_t manage_thread_id;    //manage thread id
    tp_thread_info *thread_info;   //array of per-work-thread info
};
tp_thread_pool *creat_thread_pool(int min_num, int max_num);
Thread-pool.c (implementation file):
#include "thread-pool.h"
static void *tp_work_thread(void *pthread);
static void *tp_manage_thread(void *pthread);
static TPBOOL tp_init(tp_thread_pool *this);
static void tp_close(tp_thread_pool *this);
static void tp_process_job(tp_thread_pool *this, tp_work *worker, tp_work_desc *job);
static int tp_get_thread_by_id(tp_thread_pool *this, pthread_t id);
static TPBOOL tp_add_thread(tp_thread_pool *this);
static TPBOOL tp_delete_thread(tp_thread_pool *this);
static int tp_get_tp_status(tp_thread_pool *this);
/**
 * user interface. create a thread pool.
 * para:
 *     min_num: number of threads pre-created in the pool
 *     max_num: maximum number of threads the pool may grow to
 * return:
 *     pointer to the new thread pool instance, or NULL on failure
 */
tp_thread_pool *creat_thread_pool(int min_num, int max_num){
    tp_thread_pool *this;
    this = (tp_thread_pool*)malloc(sizeof(tp_thread_pool));
    if(NULL == this)
        return NULL;
    memset(this, 0, sizeof(tp_thread_pool));

    //init member function pointers
    this->init = tp_init;
    this->close = tp_close;
    this->process_job = tp_process_job;
    this->get_thread_by_id = tp_get_thread_by_id;
    this->add_thread = tp_add_thread;
    this->delete_thread = tp_delete_thread;
    this->get_tp_status = tp_get_tp_status;

    //init member variables
    this->min_th_num = min_num;
    this->cur_th_num = this->min_th_num;
    this->max_th_num = max_num;
    pthread_mutex_init(&this->tp_lock, NULL);

    //allocate thread info structs for the maximum number of threads
    this->thread_info = (tp_thread_info*)malloc(sizeof(tp_thread_info)*this->max_th_num);
    if(NULL == this->thread_info){
        free(this);
        return NULL;
    }
    memset(this->thread_info, 0, sizeof(tp_thread_info)*this->max_th_num);

    return this;
}
/**
 * member function implementation. thread pool init function.
 * para:
 *     this: thread pool struct instance pointer
 * return:
 *     TRUE: successful; FALSE: failed
 */
TPBOOL tp_init(tp_thread_pool *this){
    int i;
    int err;

    //create the work threads and init their thread info
    for(i=0;i<this->min_th_num;i++){
        pthread_cond_init(&this->thread_info[i].thread_cond, NULL);
        pthread_mutex_init(&this->thread_info[i].thread_lock, NULL);

        err = pthread_create(&this->thread_info[i].thread_id, NULL, tp_work_thread, this);
        if(0 != err){
            printf("tp_init: create work thread failed\n");
            return FALSE;
        }
        printf("tp_init: create work thread %lu\n", (unsigned long)this->thread_info[i].thread_id);
    }

    //create the manage thread
    err = pthread_create(&this->manage_thread_id, NULL, tp_manage_thread, this);
    if(0 != err){
        printf("tp_init: create manage thread failed\n");
        return FALSE;
    }
    printf("tp_init: create manage thread %lu\n", (unsigned long)this->manage_thread_id);

    return TRUE;
}
/**
 * member function implementation. close the whole thread pool.
 * para:
 *     this: thread pool struct instance pointer
 * return:
 */
void tp_close(tp_thread_pool *this){
    int i;

    //stop the manage thread first so it no longer adjusts the pool
    pthread_cancel(this->manage_thread_id);
    pthread_join(this->manage_thread_id, NULL);
    printf("tp_close: stopped manage thread\n");

    //stop the work threads; a production pool would ask them to exit cooperatively via a flag instead
    for(i=0;i<this->cur_th_num;i++){
        pthread_cancel(this->thread_info[i].thread_id);
        pthread_join(this->thread_info[i].thread_id, NULL);
        pthread_mutex_destroy(&this->thread_info[i].thread_lock);
        pthread_cond_destroy(&this->thread_info[i].thread_cond);
        printf("tp_close: stopped work thread %lu\n", (unsigned long)this->thread_info[i].thread_id);
    }

    pthread_mutex_destroy(&this->tp_lock);

    //free the thread info array
    free(this->thread_info);
    this->thread_info = NULL;
}
/**
 * member function implementation. main user-facing interface.
 * after preparing a worker and its job description, the user calls this function to get the task processed.
 * para:
 *     this: thread pool struct instance pointer
 *     worker: the user task implementation
 *     job: the user task parameters
 * return:
 */
void tp_process_job(tp_thread_pool *this, tp_work *worker, tp_work_desc *job){
    int i;

    //look for an idle thread and hand the job to it
    for(i=0;i<this->cur_th_num;i++){
        pthread_mutex_lock(&this->thread_info[i].thread_lock);
        if(!this->thread_info[i].is_busy){
            printf("tp_process_job: thread %d idle, thread id is %lu\n", i, (unsigned long)this->thread_info[i].thread_id);
            //mark the thread busy and assign the job while its lock is held
            this->thread_info[i].is_busy = TRUE;
            this->thread_info[i].th_work = worker;
            this->thread_info[i].th_job = job;
            pthread_mutex_unlock(&this->thread_info[i].thread_lock);

            printf("tp_process_job: informing idle work thread %d, thread id is %lu\n", i, (unsigned long)this->thread_info[i].thread_id);
            pthread_cond_signal(&this->thread_info[i].thread_cond);
            return;
        }
        else
            pthread_mutex_unlock(&this->thread_info[i].thread_lock);
    }//end of for

    //all current threads are busy: try to create a new one
    pthread_mutex_lock(&this->tp_lock);
    if( this->add_thread(this) ){
        i = this->cur_th_num - 1;
        pthread_mutex_lock(&this->thread_info[i].thread_lock);
        this->thread_info[i].th_work = worker;
        this->thread_info[i].th_job = job;
        pthread_mutex_unlock(&this->thread_info[i].thread_lock);
        pthread_mutex_unlock(&this->tp_lock);

        //wake the new thread now that its job has been filled in
        printf("tp_process_job: informing new work thread %d, thread id is %lu\n", i, (unsigned long)this->thread_info[i].thread_id);
        pthread_cond_signal(&this->thread_info[i].thread_cond);
    }
    else{
        pthread_mutex_unlock(&this->tp_lock);
        //the pool is already at max_th_num; this simple implementation drops the job
        printf("tp_process_job: no idle thread and pool is full, job not processed\n");
    }
    return;
}
/**
 * member function implementation. map a thread id to its slot in the pool.
 * para:
 *     this: thread pool struct instance pointer
 *     id: thread id
 * return:
 *     index in the thread info array, or -1 if the id is not found
 */
int tp_get_thread_by_id(tp_thread_pool *this, pthread_t id){
    int i;

    for(i=0;i<this->cur_th_num;i++){
        if( pthread_equal(id, this->thread_info[i].thread_id) )
            return i;
    }

    return -1;
}
/**
 * member function implementation. add a new thread into the pool.
 * para:
 *     this: thread pool struct instance pointer
 * return:
 *     TRUE: successful; FALSE: failed
 */
static TPBOOL tp_add_thread(tp_thread_pool *this){
    int err;
    tp_thread_info *new_thread;

    if( this->max_th_num <= this->cur_th_num )
        return FALSE;

    //take the next free slot in the pre-allocated thread info array
    new_thread = &this->thread_info[this->cur_th_num];

    //init the new thread's cond & mutex
    pthread_cond_init(&new_thread->thread_cond, NULL);
    pthread_mutex_init(&new_thread->thread_lock, NULL);

    //mark it busy so the manage thread does not delete it before it receives its job
    new_thread->is_busy = TRUE;
    new_thread->th_work = NULL;
    new_thread->th_job = NULL;

    //make the new slot visible before the worker looks itself up
    this->cur_th_num++;

    err = pthread_create(&new_thread->thread_id, NULL, tp_work_thread, this);
    if(0 != err){
        //roll back; the slot belongs to the thread_info array and must not be freed
        this->cur_th_num--;
        return FALSE;
    }
    printf("tp_add_thread: create work thread %lu\n", (unsigned long)new_thread->thread_id);
    return TRUE;
}
/**
 * member function implementation. delete an idle thread from the pool.
 * only the last thread in the pool is considered, and only if it is idle.
 * para:
 *     this: thread pool struct instance pointer
 * return:
 *     TRUE: successful; FALSE: failed
 */
static TPBOOL tp_delete_thread(tp_thread_pool *this){
    int idx;

    //current thread number must not drop below the minimum
    if(this->cur_th_num <= this->min_th_num) return FALSE;

    //only the last thread in the array is considered; if it is busy, do nothing
    idx = this->cur_th_num - 1;
    if(this->thread_info[idx].is_busy) return FALSE;

    //terminate the idle thread and release its synchronisation objects;
    //a production pool would let the worker exit cooperatively instead of cancelling it
    pthread_cancel(this->thread_info[idx].thread_id);
    pthread_join(this->thread_info[idx].thread_id, NULL);
    pthread_mutex_destroy(&this->thread_info[idx].thread_lock);
    pthread_cond_destroy(&this->thread_info[idx].thread_cond);

    //one thread fewer in the pool
    this->cur_th_num--;
    return TRUE;
}
/**
 * member function implementation. get the current thread pool status: idle, normal or busy.
 * para:
 *     this: thread pool struct instance pointer
 * return:
 *     0: idle; 1: normal or busy (no action needed)
 */
static int tp_get_tp_status(tp_thread_pool *this){
    float busy_num = 0.0;
    int i;

    //count the busy threads
    for(i=0;i<this->cur_th_num;i++){
        if(this->thread_info[i].is_busy)
            busy_num++;
    }

    //compare the busy ratio against BUSY_THRESHOLD
    if(busy_num/(this->cur_th_num) < BUSY_THRESHOLD)
        return 0;//idle status
    else
        return 1;//busy or normal status
}
/**
 * internal interface. the real work thread.
 * para:
 *     pthread: thread pool struct pointer
 * return:
 */
static void *tp_work_thread(void *pthread){
    pthread_t curid;    //current thread id
    int nseq;           //current thread's seq in the this->thread_info array
    tp_work *work;
    tp_work_desc *job;
    tp_thread_pool *this = (tp_thread_pool*)pthread;    //main thread pool struct instance

    //get current thread id
    curid = pthread_self();

    //get current thread's seq in the thread info struct array;
    //this assumes pthread_create has already stored our id in thread_info
    nseq = this->get_thread_by_id(this, curid);
    if(nseq < 0)
        return NULL;
    printf("entering work thread %d, thread id is %lu\n", nseq, (unsigned long)curid);

    //wait on the cond for a real job to process
    while( TRUE ){
        pthread_mutex_lock(&this->thread_info[nseq].thread_lock);
        //the while loop also guards against spurious wakeups
        while(NULL == this->thread_info[nseq].th_work)
            pthread_cond_wait(&this->thread_info[nseq].thread_cond, &this->thread_info[nseq].thread_lock);
        work = this->thread_info[nseq].th_work;
        job = this->thread_info[nseq].th_job;
        pthread_mutex_unlock(&this->thread_info[nseq].thread_lock);

        printf("thread %d (id %lu) starts to work\n", nseq, (unsigned long)curid);

        //process the job
        work->process_job(work, job);

        //set the thread state back to idle after the work is done
        pthread_mutex_lock(&this->thread_info[nseq].thread_lock);
        this->thread_info[nseq].th_work = NULL;
        this->thread_info[nseq].is_busy = FALSE;
        pthread_mutex_unlock(&this->thread_info[nseq].thread_lock);

        printf("thread %d (id %lu) finished the work\n", nseq, (unsigned long)curid);
    }

    return NULL;
}
/**
 * internal interface. manage thread that shrinks the pool when it is mostly idle.
 * para:
 *     pthread: thread pool struct pointer
 * return:
 */
static void *tp_manage_thread(void *pthread){
    tp_thread_pool *this = (tp_thread_pool*)pthread;    //main thread pool struct instance

    //give the work threads a moment to start before the first check
    sleep(MANAGE_INTERVAL);

    do{
        if( this->get_tp_status(this) == 0 ){
            //the pool is mostly idle: shrink it back towards min_th_num
            pthread_mutex_lock(&this->tp_lock);
            do{
                if( !this->delete_thread(this) )
                    break;
            }while(TRUE);
            pthread_mutex_unlock(&this->tp_lock);
        }//end of if

        //check the pool again after the manage interval
        sleep(MANAGE_INTERVAL);
    }while(TRUE);

    return NULL;
}
4. Introduction to Database Connection Pools
A database connection is a critical, limited, and expensive resource, which is especially obvious in multi-user web applications.
Each database connection object corresponds to one physical database connection; opening a physical connection for every operation and closing it when the operation finishes leads to poor system performance. The connection pool solution is to establish enough database connections when the application starts and group them into a pool (simply put, a "pool" holding a number of ready-made connection objects), from which the application dynamically requests, uses, and releases connections. Concurrent requests beyond the number of connections in the pool wait in a request queue, and the application can grow or shrink the pool according to how heavily its connections are used. Connection pooling reuses these memory-hungry resources as much as possible, saving a great deal of memory, raising the server's efficiency, and allowing it to serve more clients; it also lets us monitor the number and usage of database connections through the pool's own management machinery. Two parameters matter most when sizing such a pool (see the sketch after the list below):
1) The minimum number of connections is the number the pool keeps open at all times, so if the application uses few database connections, a large amount of connection resources is wasted;
2) The maximum number of connections is the largest number the pool may request; database connection requests beyond this number are placed in a waiting queue, which delays the database operations behind them.
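For illustration only, the same ideas can be sketched in C in the style of the thread pool above. The names below (db_conn_pool, db_conn_acquire, db_conn_release) are hypothetical and do not belong to any particular database library; opening real connections and growing the pool up to the maximum are left out.
#include <pthread.h>

//hypothetical connection pool descriptor, for illustration only
typedef struct db_conn_pool_s{
    int min_conn_num;             //connections kept open at all times
    int max_conn_num;             //upper limit; requests beyond it wait in a queue
    int free_conn_num;            //idle connections currently available
    void **free_conns;            //stack of idle, driver-specific connection handles
    pthread_mutex_t pool_lock;    //protects the fields above
    pthread_cond_t conn_freed;    //signalled whenever a connection is returned
} db_conn_pool;

//borrow a connection, waiting in the queue while none is free
void *db_conn_acquire(db_conn_pool *pool){
    void *conn;
    pthread_mutex_lock(&pool->pool_lock);
    while(pool->free_conn_num == 0)
        pthread_cond_wait(&pool->conn_freed, &pool->pool_lock);
    conn = pool->free_conns[--pool->free_conn_num];
    pthread_mutex_unlock(&pool->pool_lock);
    return conn;
}

//return a connection to the pool and wake one waiting request
void db_conn_release(db_conn_pool *pool, void *conn){
    pthread_mutex_lock(&pool->pool_lock);
    pool->free_conns[pool->free_conn_num++] = conn;
    pthread_cond_signal(&pool->conn_freed);
    pthread_mutex_unlock(&pool->pool_lock);
}
Shrinking the pool back towards min_conn_num when demand falls would be the job of a manager very much like tp_manage_thread above.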