

Building a Hadoop Distributed Environment on CentOS


First, let me make one thing clear: I don't want to reinvent the wheel. If you want to set up a Hadoop environment, there are already plenty of detailed guides and command listings online, and I don't want to repeat them.

Second, I'm a newcomer myself and not very familiar with Hadoop. I just wanted to actually build the environment and see it with my own eyes, and fortunately I did. When I ran the wordcount example, I was genuinely impressed by how well Hadoop handles distribution: even someone with no distributed-systems experience only needs a bit of configuration to get a distributed cluster running.

OK, down to business.

Things you should know before setting up a Hadoop environment:

1. Hadoop runs on Linux, so you need to install a Linux operating system.

2. You need a cluster to run Hadoop on, e.g. several Linux machines on a LAN that can reach each other.

3. For the cluster nodes to access each other, you need passwordless SSH login.

4. Hadoop runs on the JVM, so you need to install a Java JDK and configure JAVA_HOME.

5. Hadoop's components are configured via XML. After downloading Hadoop from the official site and unpacking it, edit the corresponding configuration files under the distribution's etc/hadoop directory.

As the saying goes, sharpen your tools before you start. Here are the software and tools used in this setup:

1. VirtualBox — since I need to simulate several Linux machines and resources are limited, I create a few VMs in VirtualBox.

2. CentOS — I downloaded a CentOS 7 ISO image, loaded it into VirtualBox, and installed it.

3. SecureCRT — software for SSH remote access to Linux.

4. WinSCP — for transferring files between Windows and Linux.

5. JDK for Linux — download it from the Oracle website; just unpack and configure it.

6. Hadoop 2.7.3 — available from the Apache website.

OK, the walkthrough below comes in three parts: Linux environment preparation, Hadoop cluster installation, and Hadoop environment testing.

Linux environment preparation

Configure IP addresses

To let the host machine and the VMs (and the VMs among themselves) communicate, set the CentOS VMs' network mode in VirtualBox to Host-Only and assign the IP addresses manually. Note that the VMs' gateway must be the same as the IP address of the host-only network adapter on the host. After configuring an IP, restart the network service so that the configuration takes effect. I set up three Linux machines this way: 192.168.56.101, 192.168.56.102, and 192.168.56.103.
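For example, the static-IP configuration on the first node might look like the following. This is a minimal sketch: the interface name enp0s3 and the 192.168.56.1 gateway (VirtualBox's default host-only adapter address) are assumptions; adjust them to your own system.

# /etc/sysconfig/network-scripts/ifcfg-enp0s3 (interface name is an assumption)
TYPE=Ethernet
BOOTPROTO=static
DEVICE=enp0s3
ONBOOT=yes
IPADDR=192.168.56.101
NETMASK=255.255.255.0
GATEWAY=192.168.56.1    # must match the host-only adapter's IP on the host

# restart the network service so the new address takes effect
systemctl restart network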


Configure hostnames

For 192.168.56.101, set the hostname to hadoop01, and list the cluster's IPs and hostnames in the hosts file. The other two machines are configured the same way.

[root@hadoop01 ~]# cat /etc/sysconfig/network
# Created by anaconda
NETWORKING=yes
HOSTNAME=hadoop01

[root@hadoop01 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.56.101 hadoop01
192.168.56.102 hadoop02
192.168.56.103 hadoop03
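On CentOS 7 the hostname can also be set with the standard systemd tool instead of editing the file by hand; run the matching command on each node:

hostnamectl set-hostname hadoop01   # hadoop02 / hadoop03 on the other nodes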

Permanently disable the firewall

service iptables stop is not enough here: (1) the firewall starts again on the next reboot, so we need a command that disables it permanently; (2) since this is CentOS 7, the firewall is firewalld, and the commands to disable it are as follows.

systemctl stop firewalld.service    # stop firewalld
systemctl disable firewalld.service # keep firewalld from starting at boot

Disable the SELinux protection system

Set SELINUX to disabled, then reboot the machine for the change to take effect.

[root@hadoop02 ~]# cat /etc/sysconfig/selinux
# This file controls the state of SELinux on the system
# SELINUX= can take one of these three values:
#   enforcing - SELinux security policy is enforced
#   permissive - SELinux prints warnings instead of enforcing
#   disabled - No SELinux policy is loaded
SELINUX=disabled
# SELINUXTYPE= can take one of three values:
#   targeted - Targeted processes are protected,
#   minimum - Modification of targeted policy. Only selected processes are protected
#   mls - Multi Level Security protection
SELINUXTYPE=targeted
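If you want SELinux out of the way before the reboot as well, the standard SELinux utilities can switch it off for the current session (a convenience, not a replacement for the file edit above):

setenforce 0    # put SELinux into permissive mode immediately
getenforce      # should now print Permissive (and Disabled after the reboot)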

Passwordless SSH login across the cluster

First, generate an SSH key pair:

ssh-keygen -t rsa

Then copy the public key to all three machines:

ssh-copy-id 192.168.56.101
ssh-copy-id 192.168.56.102
ssh-copy-id 192.168.56.103

Now, if the hadoop01 machine wants to log in to hadoop02, it can simply type:

ssh hadoop02
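A quick sanity check (a sketch using the hostnames from /etc/hosts): from hadoop01, run a trivial command on every node; each line of output should be the remote hostname, with no password prompt.

for host in hadoop01 hadoop02 hadoop03; do
    ssh "$host" hostname
done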

Configure the JDK

Create three folders under /home:

tools — holds tool packages

softwares — holds installed software

data — holds data

Upload the downloaded Linux JDK to /home/tools on hadoop01 via WinSCP.

Unpack the JDK into softwares:

tar -zxf jdk-7u76-linux-x64.tar.gz -C /home/softwares

You can see that the JDK home directory is /home/softwares/jdk.x.x.x; put that path into /etc/profile and set JAVA_HOME there:

export JAVA_HOME=/home/softwares/jdk1.8.0_111
export PATH=$PATH:$JAVA_HOME/bin

Save the changes and run source /etc/profile for the configuration to take effect.

Check whether the JDK was installed successfully:

java -version 

You can then copy the files set up on this node to the other nodes:

scp -r /home/* root@192.168.56.10X:/home 
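Spelled out for this three-node cluster (a sketch; the X above stands for the last octet of the other nodes), run from hadoop01:

for ip in 192.168.56.102 192.168.56.103; do
    scp -r /home/* root@$ip:/home
done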

Hadoop cluster installation

The cluster is laid out as follows:

Node 101 serves as the HDFS NameNode and the other nodes as DataNodes; node 102 is the YARN ResourceManager and the others are NodeManagers; node 103 is the SecondaryNameNode. The JobHistoryServer is started on node 101 and the WebAppProxyServer on node 102.
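The same plan in one view (the DataNode/NodeManager placement follows from the slaves file configured below):

node      HDFS                         YARN                          other
hadoop01  NameNode, DataNode           NodeManager                   JobHistoryServer
hadoop02  DataNode                     ResourceManager, NodeManager  WebAppProxyServer
hadoop03  SecondaryNameNode, DataNode  NodeManager                   -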

Download hadoop-2.7.3

and place it in the /home/softwares folder. Since Hadoop needs the JDK to run, first configure JAVA_HOME in etc/hadoop/hadoop-env.sh under the Hadoop directory.

(PS: in hindsight, the JDK version I used may have been newer than necessary.)
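Concretely, that is two steps (a sketch; the tarball name assumes the standard 2.7.3 release archive uploaded to /home/tools):

tar -zxf /home/tools/hadoop-2.7.3.tar.gz -C /home/softwares
# then, in /home/softwares/hadoop-2.7.3/etc/hadoop/hadoop-env.sh:
export JAVA_HOME=/home/softwares/jdk1.8.0_111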

Next, modify the XML configuration of each Hadoop component in turn.

Modify core-site.xml:

specify the NameNode address

set Hadoop's temporary (cache) directory

enable Hadoop's trash mechanism

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://192.168.56.101:8020</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/softwares/hadoop-2.7.3/data/tmp</value>
  </property>
  <property>
    <name>fs.trash.interval</name>
    <value>10080</value>
  </property>
</configuration>

Modify hdfs-site.xml:

set the replication factor

disable permission checking

set the NameNode HTTP address

set the SecondaryNameNode HTTP address

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.permissions.enabled</name>
    <value>false</value>
  </property>
  <property>
    <name>dfs.namenode.http-address</name>
    <value>192.168.56.101:50070</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>192.168.56.103:50090</value>
  </property>
</configuration>

Rename mapred-site.xml.template to mapred-site.xml.
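For example, from the Hadoop home directory:

mv etc/hadoop/mapred-site.xml.template etc/hadoop/mapred-site.xml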

In it:

specify YARN as the MapReduce framework, so jobs are scheduled through YARN

specify the JobHistory server address

specify the JobHistory web UI port

enable uber mode — a MapReduce optimization that runs small jobs inside a single JVM

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>192.168.56.101:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>192.168.56.101:19888</value>
  </property>
  <property>
    <name>mapreduce.job.ubertask.enable</name>
    <value>true</value>
  </property>
</configuration>

Modify yarn-site.xml:

specify mapreduce_shuffle as the NodeManager auxiliary service

make node 102 the ResourceManager

set the web application proxy address on node 102

enable YARN log aggregation

set how long aggregated YARN logs are retained

set the NodeManager memory: 8 GB

set the NodeManager CPU: 8 cores

<configuration>
  <!-- Site specific YARN configuration properties -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>192.168.56.102</value>
  </property>
  <property>
    <name>yarn.web-proxy.address</name>
    <value>192.168.56.102:8888</value>
  </property>
  <property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.log-aggregation.retain-seconds</name>
    <value>604800</value>
  </property>
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>8192</value>
  </property>
  <property>
    <name>yarn.nodemanager.resource.cpu-vcores</name>
    <value>8</value>
  </property>
</configuration>

Configure slaves

Specify the compute nodes, i.e. the nodes that run the DataNode and NodeManager daemons:

192.168.56.101 
192.168.56.102 
192.168.56.103 

First format HDFS on the NameNode, i.e. execute on node 101.

Go to the Hadoop home directory:

cd /home/softwares/hadoop-2.7.3

Run the hadoop script in the bin directory:

bin/hadoop namenode -format

The format only succeeded if the output contains "successfully formatted".

Once all of the above configuration is done, copy it to the other machines.
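For example (a sketch, pushing the configured Hadoop directory from hadoop01 to the other two nodes):

scp -r /home/softwares/hadoop-2.7.3 root@192.168.56.102:/home/softwares
scp -r /home/softwares/hadoop-2.7.3 root@192.168.56.103:/home/softwares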

Hadoop environment testing

Go to the Hadoop home directory and run the corresponding script files.

The jps command (Java Virtual Machine Process Status tool) lists the running Java processes.

Start HDFS on the NameNode machine, node 101:

[root@hadoop01 hadoop-2.7.3]# sbin/start-dfs.sh
Java HotSpot(TM) Client VM warning: You have loaded library /home/softwares/hadoop-2.7.3/lib/native/libhadoop.so which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
16/11/07 16:49:19 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [hadoop01]
hadoop01: starting namenode, logging to /home/softwares/hadoop-2.7.3/logs/hadoop-root-namenode-hadoop01.out
192.168.56.102: starting datanode, logging to /home/softwares/hadoop-2.7.3/logs/hadoop-root-datanode-hadoop02.out
192.168.56.103: starting datanode, logging to /home/softwares/hadoop-2.7.3/logs/hadoop-root-datanode-hadoop03.out
192.168.56.101: starting datanode, logging to /home/softwares/hadoop-2.7.3/logs/hadoop-root-datanode-hadoop01.out
Starting secondary namenodes [hadoop03]
hadoop03: starting secondarynamenode, logging to /home/softwares/hadoop-2.7.3/logs/hadoop-root-secondarynamenode-hadoop03.out

Running jps on node 101 now shows that the NameNode and a DataNode have started:

[root@hadoop01 hadoop-2.7.3]# jps
7826 Jps
7270 DataNode
7052 NameNode

Running jps on nodes 102 and 103 shows that their DataNodes have started (node 103 also runs the SecondaryNameNode):

[root@hadoop02 bin]# jps
4260 DataNode
4488 Jps

[root@hadoop03 ~]# jps
6436 SecondaryNameNode
6750 Jps
6191 DataNode

Start YARN

Execute on node 102:

[root@hadoop02 hadoop-2.7.3]# sbin/start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /home/softwares/hadoop-2.7.3/logs/yarn-root-resourcemanager-hadoop02.out
192.168.56.101: starting nodemanager, logging to /home/softwares/hadoop-2.7.3/logs/yarn-root-nodemanager-hadoop01.out
192.168.56.103: starting nodemanager, logging to /home/softwares/hadoop-2.7.3/logs/yarn-root-nodemanager-hadoop03.out
192.168.56.102: starting nodemanager, logging to /home/softwares/hadoop-2.7.3/logs/yarn-root-nodemanager-hadoop02.out

Check each node with jps:

[root@hadoop02 hadoop-2.7.3]# jps
4641 ResourceManager
4260 DataNode
4765 NodeManager
5165 Jps

[root@hadoop01 hadoop-2.7.3]# jps
7270 DataNode
8375 Jps
7976 NodeManager
7052 NameNode

[root@hadoop03 ~]# jps
6915 NodeManager
6436 SecondaryNameNode
7287 Jps
6191 DataNode

Start the JobHistoryServer and the proxy daemon on their respective nodes:

[root@hadoop01 hadoop-2.7.3]# sbin/mr-jobhistory-daemon.sh start historyserver
starting historyserver, logging to /home/softwares/hadoop-2.7.3/logs/mapred-root-historyserver-hadoop01.out
[root@hadoop01 hadoop-2.7.3]# jps
8624 Jps
7270 DataNode
7976 NodeManager
8553 JobHistoryServer
7052 NameNode

[root@hadoop02 hadoop-2.7.3]# sbin/yarn-daemon.sh start proxyserver
starting proxyserver, logging to /home/softwares/hadoop-2.7.3/logs/yarn-root-proxyserver-hadoop02.out
[root@hadoop02 hadoop-2.7.3]# jps
4641 ResourceManager
4260 DataNode
5367 WebAppProxyServer
5402 Jps
4765 NodeManager

On node 101 (hadoop01), you can check the cluster status in a browser, e.g. through the HDFS web UI configured above at http://192.168.56.101:50070.

Upload a file to HDFS:

[root@hadoop01 hadoop-2.7.3]# bin/hdfs dfs -put /etc/profile /profile
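You can confirm the upload with the standard ls subcommand; /profile should appear in the listing:

bin/hdfs dfs -ls /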

Run the wordcount program:

[root@hadoop01 hadoop-2.7.3]# bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar wordcount /profile /fll_out
Java HotSpot(TM) Client VM warning: You have loaded library /home/softwares/hadoop-2.7.3/lib/native/libhadoop.so which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
16/11/07 17:17:10 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/11/07 17:17:12 INFO client.RMProxy: Connecting to ResourceManager at /192.168.56.102:8032
16/11/07 17:17:18 INFO input.FileInputFormat: Total input paths to process : 1
16/11/07 17:17:19 INFO mapreduce.JobSubmitter: number of splits:1
16/11/07 17:17:19 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1478509135878_0001
16/11/07 17:17:20 INFO impl.YarnClientImpl: Submitted application application_1478509135878_0001
16/11/07 17:17:20 INFO mapreduce.Job: The url to track the job: http://192.168.56.102:8888/proxy/application_1478509135878_0001/
16/11/07 17:17:20 INFO mapreduce.Job: Running job: job_1478509135878_0001
16/11/07 17:18:34 INFO mapreduce.Job: Job job_1478509135878_0001 running in uber mode : true
16/11/07 17:18:35 INFO mapreduce.Job:  map 0% reduce 0%
16/11/07 17:18:43 INFO mapreduce.Job:  map 100% reduce 0%
16/11/07 17:18:50 INFO mapreduce.Job:  map 100% reduce 100%
16/11/07 17:18:55 INFO mapreduce.Job: Job job_1478509135878_0001 completed successfully
16/11/07 17:18:59 INFO mapreduce.Job: Counters: 52
    File System Counters
        FILE: Number of bytes read=4264
        FILE: Number of bytes written=6412
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=3940
        HDFS: Number of bytes written=261673
        HDFS: Number of read operations=35
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=8
    Job Counters
        Launched map tasks=1
        Launched reduce tasks=1
        Other local map tasks=1
        Total time spent by all maps in occupied slots (ms)=8246
        Total time spent by all reduces in occupied slots (ms)=7538
        TOTAL_LAUNCHED_UBERTASKS=2
        NUM_UBER_SUBMAPS=1
        NUM_UBER_SUBREDUCES=1
        Total time spent by all map tasks (ms)=8246
        Total time spent by all reduce tasks (ms)=7538
        Total vcore-milliseconds taken by all map tasks=8246
        Total vcore-milliseconds taken by all reduce tasks=7538
        Total megabyte-milliseconds taken by all map tasks=8443904
        Total megabyte-milliseconds taken by all reduce tasks=7718912
    Map-Reduce Framework
        Map input records=78
        Map output records=256
        Map output bytes=2605
        Map output materialized bytes=2116
        Input split bytes=99
        Combine input records=256
        Combine output records=156
        Reduce input groups=156
        Reduce shuffle bytes=2116
        Reduce input records=156
        Reduce output records=156
        Spilled Records=312
        Shuffled Maps =1
        Failed Shuffles=0
        Merged Map outputs=1
        GC time elapsed (ms)=870
        CPU time spent (ms)=1970
        Physical memory (bytes) snapshot=243326976
        Virtual memory (bytes) snapshot=2666557440
        Total committed heap usage (bytes)=256876544
    Shuffle Errors
        BAD_ID=0
        CONNECTION=0
        IO_ERROR=0
        WRONG_LENGTH=0
        WRONG_MAP=0
        WRONG_REDUCE=0
    File Input Format Counters
        Bytes Read=1829
    File Output Format Counters
        Bytes Written=1487

You can follow the job's progress in the browser through YARN, e.g. via the tracking URL printed in the log above.

Check the final word-frequency statistics.

You can also view the HDFS file system in the browser.

[root@hadoop01 hadoop-2.7.3]# bin/hdfs dfs -cat /fll_out/part-r-00000
Java HotSpot(TM) Client VM warning: You have loaded library /home/softwares/hadoop-2.7.3/lib/native/libhadoop.so which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
16/11/07 17:29:17 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
!=      1
"$-"    1
"$2"    1
"$EUID" 2
"$HISTCONTROL"  1
"$i"    3
"${-#*i}"       1
"0"     1
":${PATH}:"     1
"`id    2
"after" 1
"ignorespace"   1
#       13
$UID    1
&&      1
()      1
*)      1
*:"$1":*)       1
-f      1
-gn`"   1
-gt     1
-r      1
-ru`    1
-u`     1
-un`"   2
-x      1
-z      1
.       2
/etc/bashrc     1
/etc/profile    1
/etc/profile.d/ 1
/etc/profile.d/*.sh     1
/usr/bin/id     1
/usr/local/sbin 2
/usr/sbin       2
/usr/share/doc/setup-*/uidgid   1
002     1
022     1
199     1
200     1
2>/dev/null`    1
;       3
;;      1
=       4
>/dev/null      1
By      1
Current 1
EUID=`id        1
Functions       1
HISTCONTROL     1
HISTCONTROL=ignoreboth  1
HISTCONTROL=ignoredups  1
HISTSIZE        1
HISTSIZE=1000   1
HOSTNAME        1
HOSTNAME=`/usr/bin/hostname     1
It's    2
JAVA_HOME=/home/softwares/jdk1.8.0_111  1
LOGNAME 1
LOGNAME=$USER   1
MAIL    1
MAIL="/var/spool/mail/$USER"    1
NOT     1
PATH    1
PATH=$1:$PATH   1
PATH=$PATH:$1   1
PATH=$PATH:$JAVA_HOME/bin       1
Path    1
System  1
This    1
UID=`id 1
USER    1
USER="`id       1
You     1
[       9
]       3
];      6
a       2
after   2
aliases 1
and     2
are     1
as      1
better  1
case    1
change  1
changes 1
check   1
could   1
create  1
custom  1
custom.sh       1
default,        1
do      1
doing   1
done    1
else    5
environment     1
environment,    1
esac    1
export  5
fi      8
file    2
for     5
future  1
get     1
go      1
good    1
i       2
idea    1
if      8
in      6
is      1
it      1
know    1
ksh     1
login   2
make    1
manipulation    1
merging 1
much    1
need    1
pathmunge       6
prevent 1
programs,       1
reservation     1
reserved        1
script  1
set     1
sets    1
setup   1
shell   2
startup 1
system  1
the     1
then    8
this    2
threshold       1
to      5
uid/gids        1
uidgid  1
umask   3
unless  1
unset   2
updates 1
validity        1
want    1
we      1
what    1
wide    1
will    1
workaround      1
you     2
your    1
{       1
}       1

This shows that the Hadoop cluster is working correctly.

That's all for this article. I hope it helps with your studies, and thank you all for supporting VEVB.

