

Word Segmentation - TF-IDF - Feature Dimensionality Reduction (Information Gain)

2019-11-15 00:35:45

Preface: the TF-IDF part of this post borrows the code from the following blog post:

http://www.49028c.com/ywl925/archive/2013/08/26/3275878.html

Because of a work requirement, I added feature dimensionality reduction based on information gain on top of that code.

I will not repeat the introduction to TF-IDF here; the formulas are given directly.

TF formula:

\mathrm{tf}_{i,j} = \frac{n_{i,j}}{\sum_k n_{k,j}}

In the formula above, n_{i,j} is the number of times the term occurs in document d_j, and the denominator is the total number of occurrences of all terms in document d_j.

IDF formula:

\mathrm{idf}_{i} = \log \frac{|D|}{|\{\, j : t_i \in d_j \,\}|}

  • |D|: the total number of documents in the corpus
  • |\{\, j : t_i \in d_j \,\}|: the number of documents that contain the term t_i (i.e. documents with n_{i,j} \neq 0). If the term does not occur in the corpus at all, this denominator becomes zero, so in practice 1 + |\{\, j : t_i \in d_j \,\}| is used.

Then

\mathrm{tfidf}_{i,j} = \mathrm{tf}_{i,j} \times \mathrm{idf}_{i}
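As a quick sanity check of the three formulas, here is a small self-contained sketch (separate from the full program further below; the class name TfIdfDemo and the toy documents are made up for illustration) that computes tf, idf and their product for a tiny in-memory corpus:

import java.util.*;

public class TfIdfDemo {
    public static void main(String[] args) {
        // A tiny corpus: each document is already segmented into terms.
        List<List<String>> docs = Arrays.asList(
                Arrays.asList("a", "b", "a", "c"),
                Arrays.asList("b", "c"),
                Arrays.asList("a", "a", "d"));

        // Document frequency: number of documents containing each term.
        Map<String, Integer> df = new HashMap<>();
        for (List<String> doc : docs) {
            for (String term : new HashSet<>(doc)) {
                df.merge(term, 1, Integer::sum);
            }
        }

        int D = docs.size();
        for (int j = 0; j < D; j++) {
            List<String> doc = docs.get(j);
            // Term frequency: raw count divided by document length.
            Map<String, Integer> counts = new HashMap<>();
            for (String term : doc) {
                counts.merge(term, 1, Integer::sum);
            }
            for (Map.Entry<String, Integer> e : counts.entrySet()) {
                double tf = (double) e.getValue() / doc.size();
                // 1 + df keeps the denominator away from zero, as noted above.
                double idf = Math.log((double) D / (1 + df.get(e.getKey())));
                System.out.printf("doc %d, term %s: tfidf = %.4f%n", j, e.getKey(), tf * idf);
            }
        }
    }
}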

Information gain

Its formula is as follows:

Suppose a variable X has n possible values and the probability of taking the i-th value is P_i. Then the entropy of X is defined as

H(X) = -\sum_{i=1}^{n} P_i \log_2 P_i

That is, the more X can vary, the more information X carries and the larger its entropy. For text classification or clustering, the more the class label of a document varies, the larger the information content of the class variable. So the information gain that a feature T brings to the clustering C or classification C is

IG(T) = H(C) - H(C|T)

H(C|T) covers two cases: the feature T is present, written t, and the feature T is absent, written t'. So

H(C|T) = P(t)H(C|t) + P(t')H(C|t')

This example is text classification, so p(t) is the probability that the term occurs across all classes, and H(C|t) is the entropy of the class variable given that the term occurs.
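Before moving to the full program, here is a minimal sketch of the information-gain computation for a single term. The per-class document counts and class sizes are made-up numbers, and the entropy helper mirrors the Entropy method used in the code below:

public class InfoGainDemo {
    // Entropy of a discrete distribution given raw counts and their total.
    static double entropy(double[] counts, double total) {
        double h = 0.0;
        for (double c : counts) {
            if (c > 0.0) {
                h += -(c / total) * Math.log(c / total) / Math.log(2.0);
            }
        }
        return h;
    }

    public static void main(String[] args) {
        double[] classCnt = {125, 125};   // documents per class (assumed)
        double[] withT    = {40, 5};      // documents of each class containing term T (assumed)
        double[] withoutT = {classCnt[0] - withT[0], classCnt[1] - withT[1]};

        double total = classCnt[0] + classCnt[1];
        double totalWithT = withT[0] + withT[1];

        double hC = entropy(classCnt, total);                   // H(C)
        double pT = totalWithT / total;                         // P(t)
        double hCt = entropy(withT, totalWithT);                // H(C|t)
        double hCnotT = entropy(withoutT, total - totalWithT);  // H(C|t')
        double ig = hC - pT * hCt - (1 - pT) * hCnotT;          // IG(T) = H(C) - H(C|T)
        System.out.println("IG(T) = " + ig);
    }
}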

The data in this example are two categories of harmful content that I collected myself: violent and reactionary. Two ways of selecting features are provided: one sets a threshold on the information gain, the other sorts the terms by information gain and keeps the top N as feature words.
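Both selection strategies can be sketched on a plain map from term to information gain. The threshold and the top-N limit below simply mirror the 0.004 cutoff and the 2000-term limit used in the full program; the term names and scores are placeholders:

import java.util.*;
import java.util.stream.Collectors;

public class FeatureSelectionDemo {
    public static void main(String[] args) {
        Map<String, Double> gains = new HashMap<>();
        gains.put("termA", 0.012);
        gains.put("termB", 0.0001);
        gains.put("termC", 0.043);

        // Strategy 1: keep every term whose information gain exceeds a threshold.
        Set<String> byThreshold = gains.entrySet().stream()
                .filter(e -> e.getValue() > 0.004)
                .map(Map.Entry::getKey)
                .collect(Collectors.toSet());

        // Strategy 2: sort by information gain (descending) and keep the top N terms.
        int topN = 2000;
        List<String> byTopN = gains.entrySet().stream()
                .sorted(Map.Entry.<String, Double>comparingByValue(Comparator.reverseOrder()))
                .limit(topN)
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());

        System.out.println("threshold: " + byThreshold);
        System.out.println("top-N: " + byTopN);
    }
}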

Files involved

Stop-word list and the word-segmentation jar: http://files.VEVb.com/files/mansiisnam/%E6%96%87%E4%BB%B6.zip

The code is as follows:

package TIDF;

import java.io.*;
import java.util.*;

import org.wltea.analyzer.lucene.IKAnalyzer;

/**
 * Word segmentation - TF-IDF - information gain
 *
 * @author LJ
 * @datetime 2015-6-15
 */
public class TestTfIdf {

    public static final String stopWordTable = "C:/Users/zzw/Desktop/sc_ot-tingyongzhongwen_hc/stopWordTable.txt"; // stop-word list
    private static ArrayList<String> FileList = new ArrayList<String>(); // list of file paths

    // Recursively collect all files under the given path and return the list.
    public static List<String> readDirs(String filepath) throws FileNotFoundException, IOException {
        try {
            File file = new File(filepath);
            if (!file.isDirectory()) {
                System.out.println("The input path is not a directory");
                System.out.println("filepath: " + file.getAbsolutePath());
            } else {
                String[] flist = file.list();
                for (int i = 0; i < flist.length; i++) {
                    File newfile = new File(filepath + File.separator + flist[i]);
                    if (!newfile.isDirectory()) {
                        FileList.add(newfile.getAbsolutePath());
                    } else if (newfile.isDirectory()) {
                        readDirs(filepath + File.separator + flist[i]);
                    }
                }
            }
        } catch (FileNotFoundException e) {
            System.out.println(e.getMessage());
        }
        return FileList;
    }

    // Read a file into a string (GBK encoded).
    public static String readFile(String file) throws FileNotFoundException, IOException {
        StringBuffer strSb = new StringBuffer();
        InputStreamReader inStrR = new InputStreamReader(new FileInputStream(file), "gbk");
        BufferedReader br = new BufferedReader(inStrR);
        String line = br.readLine();
        while (line != null) {
            strSb.append(line).append("\r\n");
            line = br.readLine();
        }
        return strSb.toString();
    }

    // Segment a file into words and drop the stop words.
    public static ArrayList<String> cutWords(String file) throws IOException {
        ArrayList<String> fenci = new ArrayList<String>();
        ArrayList<String> words = new ArrayList<String>();
        String text = TestTfIdf.readFile(file);
        IKAnalyzer analyzer = new IKAnalyzer();
        fenci = analyzer.split(text); // word segmentation
        BufferedReader StopWordFileBr = new BufferedReader(
                new InputStreamReader(new FileInputStream(new File(stopWordTable))));
        // Set that holds the stop words.
        Set<String> stopWordSet = new HashSet<String>();
        // Initialize the stop-word set.
        String stopWord = null;
        for (; (stopWord = StopWordFileBr.readLine()) != null;) {
            stopWordSet.add(stopWord);
        }
        for (String word : fenci) {
            if (stopWordSet.contains(word)) {
                continue;
            }
            words.add(word);
        }
        System.out.println(words);
        return words;
    }

    // Count how many times each word occurs in one file.
    public static HashMap<String, Integer> normalTF(ArrayList<String> cutwords) {
        HashMap<String, Integer> resTF = new HashMap<String, Integer>();
        for (String word : cutwords) {
            if (resTF.get(word) == null) {
                resTF.put(word, 1);
                System.out.println(word);
            } else {
                resTF.put(word, resTF.get(word) + 1);
                System.out.println(word.toString());
            }
        }
        System.out.println(resTF);
        return resTF;
    }

    // Compute the tf value of every word in one file.
    @SuppressWarnings("unchecked")
    public static HashMap<String, Float> tf(ArrayList<String> cutwords) {
        HashMap<String, Float> resTF = new HashMap<String, Float>();
        int wordLen = cutwords.size();
        HashMap<String, Integer> intTF = TestTfIdf.normalTF(cutwords);
        Iterator iter = intTF.entrySet().iterator();
        while (iter.hasNext()) {
            Map.Entry entry = (Map.Entry) iter.next();
            resTF.put(entry.getKey().toString(),
                    Float.parseFloat(entry.getValue().toString()) / wordLen);
            System.out.println(entry.getKey().toString() + " = "
                    + Float.parseFloat(entry.getValue().toString()) / wordLen);
        }
        return resTF;
    }

    // Raw term counts for every file in a directory.
    public static HashMap<String, HashMap<String, Integer>> normalTFAllFiles(String dirc) throws IOException {
        FileList.clear();
        HashMap<String, HashMap<String, Integer>> allNormalTF = new HashMap<String, HashMap<String, Integer>>();
        List<String> filelist = TestTfIdf.readDirs(dirc);
        for (String file : filelist) {
            HashMap<String, Integer> dict = new HashMap<String, Integer>();
            ArrayList<String> cutwords = TestTfIdf.cutWords(file);
            dict = TestTfIdf.normalTF(cutwords);
            allNormalTF.put(file, dict);
        }
        return allNormalTF;
    }

    // Return the tf values of all files in a directory.
    public static HashMap<String, HashMap<String, Float>> tfAllFiles(String dirc) throws IOException {
        FileList.clear();
        HashMap<String, HashMap<String, Float>> allTF = new HashMap<String, HashMap<String, Float>>();
        List<String> filelist = TestTfIdf.readDirs(dirc);
        for (String file : filelist) {
            HashMap<String, Float> dict = new HashMap<String, Float>();
            ArrayList<String> cutwords = TestTfIdf.cutWords(file);
            dict = TestTfIdf.tf(cutwords);
            allTF.put(file, dict);
        }
        return allTF;
    }

    // Compute the idf of every word under the given directory.
    @SuppressWarnings("unchecked")
    public static HashMap<String, Float> idf(HashMap<String, HashMap<String, Float>> all_tf, String file)
            throws IOException {
        FileList.clear();
        HashMap<String, Float> resIdf = new HashMap<String, Float>();
        HashMap<String, Integer> dict = new HashMap<String, Integer>();
        int docNum = readDirs(file).size();
        for (int i = 0; i < docNum; i++) {
            HashMap<String, Float> temp = all_tf.get(FileList.get(i));
            Iterator iter = temp.entrySet().iterator();
            while (iter.hasNext()) {
                Map.Entry entry = (Map.Entry) iter.next();
                String word = entry.getKey().toString();
                if (dict.get(word) == null) {
                    dict.put(word, 1);
                } else {
                    dict.put(word, dict.get(word) + 1);
                }
            }
        }
        // Write every word and the number of documents containing it to a file.
        StringBuilder sb1 = new StringBuilder();
        Iterator iter1 = dict.entrySet().iterator();
        while (iter1.hasNext()) {
            Map.Entry entry = (Map.Entry) iter1.next();
            if (entry.getKey().toString() != null) {
                sb1.append(entry.getKey().toString() + " " + dict.get(entry.getKey()) + "\r\n");
            }
        }
        File filewriter = new File("E:/allCount.txt");
        FileWriter fw = new FileWriter(filewriter.getAbsoluteFile());
        BufferedWriter bb = new BufferedWriter(fw);
        bb.write(sb1.toString());
        bb.close();
        System.out.println(dict);
        // Compute idf.
        System.out.println("IDF for every word is:");
        Iterator iter_dict = dict.entrySet().iterator();
        while (iter_dict.hasNext()) {
            Map.Entry entry = (Map.Entry) iter_dict.next();
            float value = (float) Math.log(docNum / Float.parseFloat(entry.getValue().toString()));
            resIdf.put(entry.getKey().toString(), value);
            System.out.println(entry.getKey().toString() + " = " + value);
        }
        return resIdf;
    }

    // Return every word under the directory together with the number of files containing it.
    @SuppressWarnings("unchecked")
    public static HashMap<String, Integer> idf_dict(HashMap<String, HashMap<String, Float>> all_tf, String file)
            throws IOException {
        FileList.clear();
        HashMap<String, Integer> dict = new HashMap<String, Integer>();
        List<String> filelist = readDirs(file);
        int docNum = filelist.size();
        for (int i = 0; i < docNum; i++) {
            HashMap<String, Float> temp = all_tf.get(filelist.get(i));
            Iterator iter = temp.entrySet().iterator();
            while (iter.hasNext()) {
                Map.Entry entry = (Map.Entry) iter.next();
                String word = entry.getKey().toString();
                if (dict.get(word) == null) {
                    dict.put(word, 1);
                } else {
                    dict.put(word, dict.get(word) + 1);
                }
            }
        }
        System.out.println(dict);
        return dict;
    }

    // Compute the TF-IDF values and print them.
    @SuppressWarnings("unchecked")
    public static void tf_idf(HashMap<String, HashMap<String, Float>> all_tf,
            HashMap<String, Float> idfs, String file) throws IOException {
        HashMap<String, HashMap<String, Float>> resTfIdf = new HashMap<String, HashMap<String, Float>>();
        FileList.clear();
        int docNum = readDirs(file).size();
        for (int i = 0; i < docNum; i++) {
            String filepath = FileList.get(i);
            HashMap<String, Float> tfidf = new HashMap<String, Float>();
            HashMap<String, Float> temp = all_tf.get(filepath);
            Iterator iter = temp.entrySet().iterator();
            while (iter.hasNext()) {
                Map.Entry entry = (Map.Entry) iter.next();
                String word = entry.getKey().toString();
                Float value = (float) Float.parseFloat(entry.getValue().toString()) * idfs.get(word);
                tfidf.put(word, value);
            }
            resTfIdf.put(filepath, tfidf);
        }
        System.out.println("TF-IDF for every file is:");
        DisTfIdf(resTfIdf); // display the TF-IDF values
    }

    // Compute the TF-IDF values and return them.
    @SuppressWarnings("unchecked")
    public static HashMap<String, HashMap<String, Float>> tf_idf_return(
            HashMap<String, HashMap<String, Float>> all_tf,
            HashMap<String, Float> idfs, String file) throws IOException {
        FileList.clear();
        HashMap<String, HashMap<String, Float>> resTfIdf = new HashMap<String, HashMap<String, Float>>();
        int docNum = readDirs(file).size();
        for (int i = 0; i < docNum; i++) {
            String filepath = FileList.get(i);
            HashMap<String, Float> tfidf = new HashMap<String, Float>();
            HashMap<String, Float> temp = all_tf.get(filepath);
            Iterator iter = temp.entrySet().iterator();
            while (iter.hasNext()) {
                Map.Entry entry = (Map.Entry) iter.next();
                String word = entry.getKey().toString();
                Float value = (float) Float.parseFloat(entry.getValue().toString()) * idfs.get(word);
                tfidf.put(word, value);
            }
            resTfIdf.put(filepath, tfidf);
        }
        return resTfIdf;
    }

    // Print the TF-IDF values and also write them to a file.
    @SuppressWarnings("unchecked")
    public static void DisTfIdf(HashMap<String, HashMap<String, Float>> tfidf) throws IOException {
        StringBuilder stall = new StringBuilder();
        Iterator iter1 = tfidf.entrySet().iterator();
        while (iter1.hasNext()) {
            Map.Entry entrys = (Map.Entry) iter1.next();
            System.out.println("FileName: " + entrys.getKey().toString());
            System.out.print("{");
            HashMap<String, Float> temp = (HashMap<String, Float>) entrys.getValue();
            Iterator iter2 = temp.entrySet().iterator();
            while (iter2.hasNext()) {
                Map.Entry entry = (Map.Entry) iter2.next();
                System.out.print(entry.getKey().toString() + " = " + entry.getValue().toString() + ", ");
                stall.append(entrys.getKey().toString() + " " + entry.getKey().toString() + " "
                        + entry.getValue().toString() + "\r\n");
            }
            System.out.println("}");
        }
        File filewriter = new File("E:/allTFIDF.txt");
        FileWriter fw = new FileWriter(filewriter.getAbsoluteFile());
        BufferedWriter bz = new BufferedWriter(fw);
        bz.write(stall.toString());
        bz.close();
    }

    // Entropy of a single attribute given its counts and their total.
    public static double Entropy(double[] p, double tot) {
        double entropy = 0.0;
        for (int i = 0; i < p.length; i++) {
            if (p[i] > 0.0) {
                entropy += -p[i] / tot * Math.log(p[i] / tot) / Math.log(2.0);
            }
        }
        return entropy;
    }

    // Feature dimensionality reduction via information gain.
    @SuppressWarnings("unchecked")
    private static void Total(int N, HashMap<String, HashMap<String, Float>> result,
            HashMap<String, Integer> idfs_dict_neg, HashMap<String, Integer> idfs_dict_pos,
            String file) throws IOException {
        FileList.clear();
        double[] classCnt = new double[N]; // documents per class
        double totalCnt = 0.0; // total number of documents
        for (int c = 0; c < N; c++) {
            classCnt[c] = 125; // number of documents in each class
            totalCnt += classCnt[c];
        }
        int docNum = readDirs(file).size();
        int num = 0; // running index assigned to terms before feature selection
        int numb = 0; // running index assigned to terms after feature selection
        double totalEntroy = Entropy(classCnt, totalCnt); // overall entropy H(C)
        HashMap<String, Integer> count = new HashMap<String, Integer>(); // term -> index
        HashMap<String, Integer> countG = new HashMap<String, Integer>(); // term -> index after feature selection
        HashMap<String, Double> countG1 = new HashMap<String, Double>(); // selected term -> information gain
        HashMap<String, Double> infogains = new HashMap<String, Double>(); // term -> information gain
        StringBuilder st = new StringBuilder(); // buffer: file name, term, information gain, TF-IDF
        StringBuilder ss = new StringBuilder(); // buffer before feature selection: class, term index, TF-IDF
        StringBuilder sr = new StringBuilder(); // buffer after feature selection: class, term index, TF-IDF
        for (int i = 0; i < docNum; i++) {
            String filepath = FileList.get(i);
            HashMap<String, Float> temp = result.get(filepath);
            Iterator iter = temp.entrySet().iterator();
            if (filepath.contains("dubo")) {
                ss.append(1 + "  "); // gambling documents are labelled class 1
            } else if (filepath.contains("fangdong")) {
                ss.append(2 + "  "); // reactionary documents are labelled class 2
            }
            while (iter.hasNext()) {
                Map.Entry entry = (Map.Entry) iter.next();
                String f = entry.getKey().toString();
                double[] featureCntWithF = new double[N]; // per-class number of documents containing term F
                double[] featureCntWithoutF = new double[N]; // per-class number of documents not containing term F
                double totalCntWithF = 0.0; // documents containing term F over all classes
                double totalCntWithoutF = 0.0; // documents not containing term F over all classes
                for (int c = 0; c < N; c++) {
                    Iterator iter_dict = null;
                    switch (c) {
                    case 0:
                        iter_dict = idfs_dict_neg.entrySet().iterator();
                        break;
                    case 1:
                        iter_dict = idfs_dict_pos.entrySet().iterator();
                        break;
                    }
                    while (iter_dict.hasNext()) {
                        Map.Entry entry_neg = (Map.Entry) iter_dict.next();
                        if (f.equals(entry_neg.getKey().toString())) { // the term occurs in this class
                            featureCntWithF[c] = Double.parseDouble(entry_neg.getValue().toString()); // documents of this class containing the term
                            break;
                        } else {
                            featureCntWithF[c] = 0.0;
                        }
                    }
                    featureCntWithoutF[c] = classCnt[c] - featureCntWithF[c]; // documents without F = class total minus documents with F
                    totalCntWithF += featureCntWithF[c];
                    totalCntWithoutF += featureCntWithoutF[c];
                }
                double entropyWithF = Entropy(featureCntWithF, totalCntWithF);
                double entropyWithoutF = Entropy(featureCntWithoutF, totalCntWithoutF);
                double wf = totalCntWithF / totalCnt;
                // Information gain: IG(T) = H(C) - P(t)H(C|t) - P(t')H(C|t')
                double infoGain = totalEntroy - wf * entropyWithF - (1.0 - wf) * entropyWithoutF;
                infogains.put(f, infoGain);
                st.append(filepath + " " + f + " infoGain=" + infoGain
                        + " tfidf=" + entry.getValue().toString() + "\r\n");
                // Option 1: select features directly with a threshold, which skips the second pass below.
                // if (infogains.get(f) > 0.004011587943125061) {
                // Assign an index to term f.
                if (count.get(f) == null) {
                    num++;
                    count.put(f, num);
                }
                ss.append(count.get(f) + ":" + entry.getValue() + " ");
                // }
            }
            ss.append("\r\n");
        }
        File fileprepare = new File("E:/test.txt");
        FileWriter fz = new FileWriter(fileprepare.getAbsoluteFile());
        BufferedWriter bz = new BufferedWriter(fz);
        bz.write(ss.toString());
        bz.close();
        File filewriter = new File("E:/jieguo.txt");
        FileWriter fw = new FileWriter(filewriter.getAbsoluteFile());
        BufferedWriter bw = new BufferedWriter(fw);
        bw.write(st.toString());
        bw.close();
        // Option 2: sort the terms by information gain and keep a fixed number of top terms as features.
        // Sort the information gains in descending order.
        ArrayList<Map.Entry<String, Double>> infoIds =
                new ArrayList<Map.Entry<String, Double>>(infogains.entrySet());
        Collections.sort(infoIds, new Comparator<Map.Entry<String, Double>>() {
            public int compare(Map.Entry<String, Double> o1, Map.Entry<String, Double> o2) {
                if (o2.getValue() - o1.getValue() > 0) {
                    return 1; // descending order
                } else {
                    return -1;
                }
            }
        });
        // Keep the 2000 terms with the highest information gain as feature words.
        for (int c = 0; c < 2000; c++) {
            countG1.put(infoIds.get(c).getKey(), infoIds.get(c).getValue());
        }
        // Second pass over all documents.
        for (int i = 0; i < docNum; i++) {
            String filepath = FileList.get(i);
            HashMap<String, Float> temp = result.get(filepath);
            Iterator iter = temp.entrySet().iterator();
            if (filepath.contains("dubo")) {
                sr.append(1 + "  ");
            } else if (filepath.contains("fangdong")) {
                sr.append(2 + "  ");
            }
            while (iter.hasNext()) {
                Map.Entry entry = (Map.Entry) iter.next();
                String f = entry.getKey().toString();
                // Keep only the terms that survived feature selection.
                if (countG1.get(f) != null) {
                    // Assign an index to the term.
                    if (countG.get(f) == null) {
                        numb++;
                        countG.put(f, numb);
                    }
                    sr.append(countG.get(f) + ":" + entry.getValue() + " ");
                }
            }
            sr.append("\r\n");
        }
        File fileprepare1 = new File("E:/testt.txt");
        FileWriter fr = new FileWriter(fileprepare1.getAbsoluteFile());
        BufferedWriter br = new BufferedWriter(fr);
        br.write(sr.toString());
        br.close();
    }

    public static void main(String[] args) throws IOException {
        String file = "C:/Users/zzw/Desktop/項目管理/語料/test"; // root data path
        String file1 = "C:/Users/zzw/Desktop/項目管理/語料/test/賭博"; // data path of class 1 (gambling)
        String file2 = "C:/Users/zzw/Desktop/項目管理/語料/test/反動"; // data path of class 2 (reactionary)
        HashMap<String, HashMap<String, Float>> all_tf = tfAllFiles(file);
        HashMap<String, HashMap<String, Float>> all_tf_neg = tfAllFiles(file1); // tf values of the files under file1
        HashMap<String, HashMap<String, Float>> all_tf_pos = tfAllFiles(file2); // tf values of the files under file2
        System.out.println();
        HashMap<String, Integer> idfs_dict_neg = idf_dict(all_tf_neg, file1); // words under file1 and the number of files containing each
        HashMap<String, Integer> idfs_dict_pos = idf_dict(all_tf_pos, file2); // words under file2 and the number of files containing each
        HashMap<String, Float> idfs = idf(all_tf, file);
        System.out.println();
        tf_idf(all_tf, idfs, file);
        HashMap<String, HashMap<String, Float>> result = tf_idf_return(all_tf, idfs, file);
        int N = 2; // number of classes
        /*
         * Information gain: IG(T) = H(C) - H(C|T), H(C|T) = P(t)H(C|t) + P(t')H(C|t')
         */
        Total(N, result, idfs_dict_neg, idfs_dict_pos, file); // feature reduction by information gain
    }
}

The resulting output files are as follows.

allCount.txt (each line holds a term and the number of documents that contain it)

