

PgSQL · Feature Analysis · A PostgreSQL Aurora Solution and DEMO

2024-07-21 02:51:35
Source: reprinted

Preface

Amazon's Aurora database engine supports an architecture with a single copy of storage, one writer, and multiple readers. It is similar to Oracle RAC in that storage is shared, but only one instance may perform writes; the other instances are read-only. Compared with a traditional replication-based one-master-many-readers setup, this saves both storage and network bandwidth.

We can use PostgreSQL's hot standby mode to simulate this shared-storage, one-writer-many-readers architecture, but a few things need attention. A hot standby also writes to the database: during recovery, for example, it modifies the control file, data files, and so on, and in this setup those writes are redundant. In addition, much of the server's state lives in memory, so the in-memory state must be kept up to date as well.

Also note these files and directories:

pg_xlog, pg_log, pg_clog, pg_multixact, postgresql.conf, recovery.conf, postmaster.pid

To actually implement the one-writer, multiple-standby architecture, the PostgreSQL kernel must be modified so that:

1. Certain files exist once per instance: postgresql.conf, recovery.conf, postmaster.pid, pg_control.
2. The hot standby does not perform actual recovery, but still updates its own in-memory state (the current OID, XID, and so on) as well as its own pg_control.
3. Across instances, OS dirty pages and database shared buffer dirty pages are synchronized from the primary to the standby nodes.
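The per-instance file layout from point 1 can be sketched with symlinks: shared directories stay on the shared volume while per-instance files live in each instance's private directory. All paths and names below are illustrative, not part of PostgreSQL:

```shell
# Sketch of a per-instance layout over a shared volume (illustrative paths).
SHARED=$(mktemp -d)   # stands in for the shared-storage cluster directory
INST=$(mktemp -d)     # stands in for one instance's private directory

# shared parts of the cluster: data files and WAL
mkdir -p "$SHARED/base" "$SHARED/pg_xlog"

# per-instance files: each instance keeps its own private copy
touch "$INST/postgresql.conf" "$INST/recovery.conf" \
      "$INST/postmaster.pid" "$INST/pg_control"

# the instance reaches the shared parts through symlinks
ln -s "$SHARED/base"    "$INST/base"
ln -s "$SHARED/pg_xlog" "$INST/pg_xlog"

ls -l "$INST"
```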

Simulation Process

Without changing any code, starting multiple instances on the same host for testing runs into several problems. (These problems, and the code changes that fix them, are described later.)

Configuration files of the primary instance:

 # vi postgresql.conf
 listen_addresses='0.0.0.0'
 port=1921
 max_connections=100
 unix_socket_directories='.'
 ssl=on
 ssl_ciphers='EXPORT40'
 shared_buffers=512MB
 huge_pages=try
 max_prepared_transactions=0
 max_stack_depth=100kB
 dynamic_shared_memory_type=posix
 max_files_per_process=500
 wal_level=logical
 fsync=off
 synchronous_commit=off
 wal_sync_method=open_datasync
 full_page_writes=off
 wal_log_hints=off
 wal_buffers=16MB
 wal_writer_delay=10ms
 checkpoint_segments=8
 archive_mode=off
 archive_command='/bin/date'
 max_wal_senders=10
 max_replication_slots=10
 hot_standby=on
 wal_receiver_status_interval=1s
 hot_standby_feedback=on
 enable_bitmapscan=on
 enable_hashagg=on
 enable_hashjoin=on
 enable_indexscan=on
 enable_material=on
 enable_mergejoin=on
 enable_nestloop=on
 enable_seqscan=on
 enable_sort=on
 enable_tidscan=on
 log_destination='csvlog'
 logging_collector=on
 log_directory='pg_log'
 log_truncate_on_rotation=on
 log_rotation_size=10MB
 log_checkpoints=on
 log_connections=on
 log_disconnections=on
 log_duration=off
 log_error_verbosity=verbose
 log_line_prefix='%i '
 log_statement='none'
 log_timezone='PRC'
 autovacuum=on
 log_autovacuum_min_duration=0
 autovacuum_vacuum_scale_factor=0.0002
 autovacuum_analyze_scale_factor=0.0001
 datestyle='iso, mdy'
 timezone='PRC'
 lc_messages='C'
 lc_monetary='C'
 lc_numeric='C'
 lc_time='C'
 default_text_search_config='pg_catalog.english'

 # vi recovery.done
 recovery_target_timeline='latest'
 standby_mode=on
 primary_conninfo = 'host=127.0.0.1 port=1921 user=postgres keepalives_idle=60'

 # vi pg_hba.conf
 local   replication     postgres                                trust
 host    replication     postgres        127.0.0.1/32            trust

Start the primary instance.

postgres@digoal-> pg_ctl start

Start the read-only instance. postmaster.pid must be deleted first. Note that newer PostgreSQL versions carry a patch that automatically shuts the database down when this file is deleted, so take care not to use the latest PostgreSQL, or revert that patch first.

postgres@digoal-> cd $PGDATA
postgres@digoal-> mv recovery.done recovery.conf
postgres@digoal-> rm -f postmaster.pid
postgres@digoal-> pg_ctl start -o "-c log_directory=pg_log1922 -c port=1922"

Check the current control file state: the read-only instance has modified the control file, consistent with the earlier description.

postgres@digoal-> pg_controldata |grep state
Database cluster state:               in archive recovery

Connect to the primary instance, create a table, and insert some test data.

psql -p 1921
postgres=# create table test1(id int);
CREATE TABLE
postgres=# insert into test1 select generate_series(1,10);
INSERT 0 10

Query the inserted data on the read-only instance.

postgres@digoal-> psql -h 127.0.0.1 -p 1922
postgres=# select * from test1;
 id
----
  1
  2
  3
  4
  5
  6
  7
  8
  9
 10
(10 rows)

After the primary instance executes a checkpoint, the control file state changes back to "in production".

psql -p 1921
postgres=# checkpoint;
CHECKPOINT
postgres@digoal-> pg_controldata |grep state
Database cluster state:               in production

But if the read-only instance executes a checkpoint, the state flips back to recovery.

postgres@digoal-> psql -h 127.0.0.1 -p 1922
psql (9.4.4)
postgres=# checkpoint;
CHECKPOINT
postgres@digoal-> pg_controldata |grep state
Database cluster state:               in archive recovery

Note that the example above has a problem: with streaming replication, XLOG records are copied from the primary over the network and written back over the corresponding offsets of the same, already-written XLOG. This can make the primary see inconsistent data. For example, if a data block was changed several times, the read-only instance may overwrite it with an older version during recovery, after which the primary sees the old version of the block. We fix this later by preventing the read-only instance from performing recovery at all.

On the other hand, we know a PostgreSQL standby can read XLOG for recovery from three sources (the stream, pg_xlog, and restore_command). In a shared-storage environment there is therefore no need for streaming replication at all; the standby can read directly from the pg_xlog directory. Edit recovery.conf and comment out the following line:

 # primary_conninfo = 'host=127.0.0.1 port=1921 user=postgres keepalives_idle=60'
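This edit can also be scripted. The sketch below builds a scratch recovery.conf and comments the line out with GNU sed's in-place mode (paths are illustrative, not the real $PGDATA):

```shell
# Create a scratch recovery.conf and comment out primary_conninfo,
# so the standby reads WAL from pg_xlog instead of the stream.
cd "$(mktemp -d)"
cat > recovery.conf <<'EOF'
recovery_target_timeline='latest'
standby_mode=on
primary_conninfo = 'host=127.0.0.1 port=1921 user=postgres keepalives_idle=60'
EOF

sed -i "s/^primary_conninfo/# primary_conninfo/" recovery.conf
grep primary_conninfo recovery.conf
```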

Restart the read-only instance.

pg_ctl stop -m fast
postgres@digoal-> pg_ctl start -o "-c log_directory=pg_log1922 -c port=1922"

Re-test data consistency. On the primary instance:

postgres=# insert into test1 select generate_series(1,10);
INSERT 0 10
postgres=# insert into test1 select generate_series(1,10);
INSERT 0 10
postgres=# insert into test1 select generate_series(1,10);
INSERT 0 10
postgres=# insert into test1 select generate_series(1,10);
INSERT 0 10

On the read-only instance:

postgres=# select count(*) from test1;
 count
-------
    60
(1 row)

Problem Analysis and Solutions

So far, several problems remain unsolved:

1. The standby still performs recovery, and the writes generated by recovery grow with the number of read-only instances. (Recovery does have one benefit: it solves the dirty page problem, since dirty pages in the primary's shared buffers then need no separate synchronization to the read-only instances.) But recovery also introduces a serious bug: replay may touch the same data page the primary is currently modifying, or replay may roll a block back to an old state that the primary has since updated, leaving the block inconsistent. If the read-only instance is then shut down, and the primary is shut down immediately after, the block is inconsistent when the database comes back up.
2. The standby still modifies the control file.
3. To start another instance on the same $PGDATA, postmaster.pid must first be deleted.

To shut down an instance whose postmaster.pid has already been deleted, you must find the pid of its postgres main process and send it signal 15, 2, or 3; pg_ctl maps its shutdown modes to exactly these signals:

 static void
 set_mode(char *modeopt)
 {
 	if (strcmp(modeopt, "s") == 0 || strcmp(modeopt, "smart") == 0)
 	{
 		shutdown_mode = SMART_MODE;
 		sig = SIGTERM;
 	}
 	else if (strcmp(modeopt, "f") == 0 || strcmp(modeopt, "fast") == 0)
 	{
 		shutdown_mode = FAST_MODE;
 		sig = SIGINT;
 	}
 	else if (strcmp(modeopt, "i") == 0 || strcmp(modeopt, "immediate") == 0)
 	{
 		shutdown_mode = IMMEDIATE_MODE;
 		sig = SIGQUIT;
 	}
 	else
 	{
 		write_stderr(_("%s: unrecognized shutdown mode \"%s\"\n"), progname, modeopt);
 		do_advice();
 		exit(1);
 	}
 }
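Since pg_ctl maps shutdown modes to signals exactly as set_mode() shows, a small shell helper (our own, not part of pg_ctl) can reproduce the mapping for shutting down an instance when postmaster.pid is gone and pg_ctl cannot be used:

```shell
# Map a pg_ctl -m mode name to the signal pg_ctl would send,
# mirroring set_mode() above.
mode_to_signal() {
  case "$1" in
    s|smart)     echo TERM ;;   # SIGTERM (15): smart shutdown
    f|fast)      echo INT  ;;   # SIGINT  (2):  fast shutdown
    i|immediate) echo QUIT ;;   # SIGQUIT (3):  immediate shutdown
    *) echo "unrecognized shutdown mode: $1" >&2; return 1 ;;
  esac
}

mode_to_signal fast   # prints "INT"
```

Against a running postmaster whose pid file was deleted, one could then signal the oldest postgres process directly, e.g. `kill -s "$(mode_to_signal fast)" "$(pgrep -o -u postgres postgres)"`.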

When the primary deletes a relation page, the read-only instance hits an "invalid xlog: corresponding rel page does not exist" error during replay. This is another problem caused by the read-only instance replaying logs, and it is very easy to reproduce: just drop a table.

 2015-10-09 13:30:50.776 CST,,,2082,,561750ab.822,20,,2015-10-09 13:29:15 CST,1/0,0,WARNING,01000,"page 8 of relation base/151898/185251 does not exist",,,,,"xlog redo clean: rel 1663/151898/185251; blk 8 remxid 640632117",,,"report_invalid_page, xlogutils.c:67",""
 2015-10-09 13:30:50.776 CST,,,2082,,561750ab.822,21,,2015-10-09 13:29:15 CST,1/0,0,PANIC,XX000,"WAL contains references to invalid pages",,,,,"xlog redo clean: rel 1663/151898/185251; blk 8 remxid 640632117",,,"log_invalid_page, xlogutils.c:91",""

For the purposes of this demo, the error can be bypassed by commenting out the following section.

 # vi src/backend/access/transam/xlogutils.c

 /* Log a reference to an invalid page */
 static void
 log_invalid_page(RelFileNode node, ForkNumber forkno, BlockNumber blkno,
 				 bool present)
 {
   //////
 	/*
 	 * Once recovery has reached a consistent state, the invalid-page table
 	 * should be empty and remain so. If a reference to an invalid page is
 	 * found after consistency is reached, PANIC immediately. This might seem
 	 * aggressive, but it's better than letting the invalid reference linger
 	 * in the hash table until the end of recovery and PANIC there, which
 	 * might come only much later if this is a standby server.
 	 */
 	//if (reachedConsistency)
 	//{
 	//	report_invalid_page(WARNING, node, forkno, blkno, present);
 	//	elog(PANIC, "WAL contains references to invalid pages");
 	//}

Because this demo runs on a single operating system, it does not hit the OS dirty page cache problem. In a multi-host environment, we would need to solve the synchronization of the OS dirty page cache across hosts, or eliminate the cache entirely, for example with direct I/O, or with a cluster file system such as GFS2.

To productize this, at least the problems above must be solved.

First, solve the problem of the Aurora instance writing data files, the control file, and checkpoints.

Add a startup parameter indicating whether the instance is an Aurora instance (i.e. a read-only instance):

  # vi src/backend/utils/misc/guc.c

 /******** option records follow ********/
 static struct config_bool ConfigureNamesBool[] =
 {
 	{
 		{"aurora", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
 			/* description string left over from the copied "bonjour" entry */
 			gettext_noop("Enables advertising the server via Bonjour."),
 			NULL
 		},
 		&aurora,
 		false,
 		NULL, NULL, NULL
 	},

Declare the new variable:

 # vi src/include/postmaster/postmaster.h
 extern bool aurora;

Prevent the Aurora instance from updating the control file:

 # vi src/backend/access/transam/xlog.c
 #include "postmaster/postmaster.h"

 bool aurora;

 void
 UpdateControlFile(void)
 {
 	if (aurora)
 		return;

Prevent the Aurora instance's bgwriter process from doing any work:

 # vi src/backend/postmaster/bgwriter.c
 #include "postmaster/postmaster.h"

 bool aurora;

 /*
  * Main entry point for bgwriter process
  *
  * This is invoked from AuxiliaryProcessMain, which has already created the
  * basic execution environment, but not enabled signals yet.
  */
 void
 BackgroundWriterMain(void)
 {
   //////
 	pg_usleep(1000000L);

 	/*
 	 * If an exception is encountered, processing resumes here.
 	 *
 	 * See notes in postgres.c about the design of this coding.
 	 */
 	if (!aurora && sigsetjmp(local_sigjmp_buf, 1) != 0)
 	{
   //////
 		/*
 		 * Do one cycle of dirty-buffer writing.
 		 */
 		if (!aurora)
 		{
 			can_hibernate = BgBufferSync();
   //////
 		}
 		pg_usleep(1000000L);
 	}
 }

Prevent the Aurora instance's checkpointer process from doing any work:

 # vi src/backend/postmaster/checkpointer.c
 #include "postmaster/postmaster.h"

 bool aurora;
   //////

 /*
  * Main entry point for checkpointer process
  *
  * This is invoked from AuxiliaryProcessMain, which has already created the
  * basic execution environment, but not enabled signals yet.
  */
 void
 CheckpointerMain(void)
 {
   //////
 	/*
 	 * Loop forever
 	 */
 	for (;;)
 	{
 		bool		do_checkpoint = false;
 		int			flags = 0;
 		pg_time_t	now;
 		int			elapsed_secs;
 		int			cur_timeout;
 		int			rc = 0;	/* initialized: with aurora=true, WaitLatch is skipped */

 		pg_usleep(100000L);

 		/* Clear any already-pending wakeups */
 		if (!aurora)
 			ResetLatch(&MyProc->procLatch);

 		/*
 		 * Process any requests or signals received recently.
 		 */
 		if (!aurora)
 			AbsorbFsyncRequests();

 		if (!aurora && got_SIGHUP)
 		{
 			got_SIGHUP = false;
 			ProcessConfigFile(PGC_SIGHUP);

 			/*
 			 * Checkpointer is the last process to shut down, so we ask it to
 			 * hold the keys for a range of other tasks required most of which
 			 * have nothing to do with checkpointing at all.
 			 *
 			 * For various reasons, some config values can change dynamically
 			 * so the primary copy of them is held in shared memory to make
 			 * sure all backends see the same value.  We make Checkpointer
 			 * responsible for updating the shared memory copy if the
 			 * parameter setting changes because of SIGHUP.
 			 */
 			UpdateSharedMemoryConfig();
 		}
 		if (!aurora && checkpoint_requested)
 		{
 			checkpoint_requested = false;
 			do_checkpoint = true;
 			BgWriterStats.m_requested_checkpoints++;
 		}
 		if (!aurora && shutdown_requested)
 		{
 			/*
 			 * From here on, elog(ERROR) should end with exit(1), not send
 			 * control back to the sigsetjmp block above
 			 */
 			ExitOnAnyError = true;
 			/* Close down the database */
 			ShutdownXLOG(0, 0);
 			/* Normal exit from the checkpointer is here */
 			proc_exit(0);		/* done */
 		}

 		/*
 		 * Force a checkpoint if too much time has elapsed since the last one.
 		 * Note that we count a timed checkpoint in stats only when this
 		 * occurs without an external request, but we set the CAUSE_TIME flag
 		 * bit even if there is also an external request.
 		 */
 		now = (pg_time_t) time(NULL);
 		elapsed_secs = now - last_checkpoint_time;
 		if (!aurora && elapsed_secs >= CheckPointTimeout)
 		{
 			if (!do_checkpoint)
 				BgWriterStats.m_timed_checkpoints++;
 			do_checkpoint = true;
 			flags |= CHECKPOINT_CAUSE_TIME;
 		}

 		/*
 		 * Do a checkpoint if requested.
 		 */
 		if (!aurora && do_checkpoint)
 		{
 			bool		ckpt_performed = false;
 			bool		do_restartpoint;

 			/* use volatile pointer to prevent code rearrangement */
 			volatile CheckpointerShmemStruct *cps = CheckpointerShmem;

 			/*
 			 * Check if we should perform a checkpoint or a restartpoint. As a
 			 * side-effect, RecoveryInProgress() initializes TimeLineID if
 			 * it's not set yet.
 			 */
 			do_restartpoint = RecoveryInProgress();

 			/*
 			 * Atomically fetch the request flags to figure out what kind of a
 			 * checkpoint we should perform, and increase the started-counter
 			 * to acknowledge that we've started a new checkpoint.
 			 */
 			SpinLockAcquire(&cps->ckpt_lck);
 			flags |= cps->ckpt_flags;
 			cps->ckpt_flags = 0;
 			cps->ckpt_started++;
 			SpinLockRelease(&cps->ckpt_lck);

 			/*
 			 * The end-of-recovery checkpoint is a real checkpoint that's
 			 * performed while we're still in recovery.
 			 */
 			if (flags & CHECKPOINT_END_OF_RECOVERY)
 				do_restartpoint = false;
   //////
 			ckpt_active = false;
 		}

 		/* Check for archive_timeout and switch xlog files if necessary. */
 		if (!aurora)
 			CheckArchiveTimeout();

 		/*
 		 * Send off activity statistics to the stats collector.  (The reason
 		 * why we re-use bgwriter-related code for this is that the bgwriter
 		 * and checkpointer used to be just one process.  It's probably not
 		 * worth the trouble to split the stats support into two independent
 		 * stats message types.)
 		 */
 		if (!aurora)
 			pgstat_send_bgwriter();

 		/*
 		 * Sleep until we are signaled or it's time for another checkpoint or
 		 * xlog file switch.
 		 */
 		now = (pg_time_t) time(NULL);
 		elapsed_secs = now - last_checkpoint_time;
 		if (elapsed_secs >= CheckPointTimeout)
 			continue;			/* no sleep for us ... */
 		cur_timeout = CheckPointTimeout - elapsed_secs;

 		if (!aurora && XLogArchiveTimeout > 0 && !RecoveryInProgress())
 		{
 			elapsed_secs = now - last_xlog_switch_time;
 			if (elapsed_secs >= XLogArchiveTimeout)
 				continue;		/* no sleep for us ... */
 			cur_timeout = Min(cur_timeout, XLogArchiveTimeout - elapsed_secs);
 		}

 		if (!aurora)
 			rc = WaitLatch(&MyProc->procLatch,
 						   WL_LATCH_SET | WL_TIMEOUT | WL_POSTMASTER_DEATH,
 						   cur_timeout * 1000L /* convert to ms */ );

 		/*
 		 * Emergency bailout if postmaster has died.  This is to avoid the
 		 * necessity for manual cleanup of all postmaster children.
 		 */
 		if (rc & WL_POSTMASTER_DEATH)
 			exit(1);
 	}
 }
   //////

 /* SIGINT: set flag to run a normal checkpoint right away */
 static void
 ReqCheckpointHandler(SIGNAL_ARGS)
 {
 	int			save_errno = errno;

 	if (aurora)
 		return;

 	checkpoint_requested = true;
 	if (MyProc)
 		SetLatch(&MyProc->procLatch);
 	errno = save_errno;
 }
   //////

 /*
  * AbsorbFsyncRequests
  *		Retrieve queued fsync requests and pass them to local smgr.
  *
  * This is exported because it must be called during CreateCheckPoint;
  * we have to be sure we have accepted all pending requests just before
  * we start fsync'ing.  Since CreateCheckPoint sometimes runs in
  * non-checkpointer processes, do nothing if not checkpointer.
  */
 void
 AbsorbFsyncRequests(void)
 {
 	CheckpointerRequest *requests = NULL;
 	CheckpointerRequest *request;
 	int			n;

 	if (!AmCheckpointerProcess() || aurora)
 		return;
   //////

Prevent manual CHECKPOINT commands on the Aurora instance:

 # vi src/backend/tcop/utility.c
 #include "postmaster/postmaster.h"

 bool aurora;
   //////

 void
 standard_ProcessUtility(Node *parsetree,
 						const char *queryString,
 						ProcessUtilityContext context,
 						ParamListInfo params,
 						DestReceiver *dest,
 						char *completionTag)
 {
   //////
 		case T_CheckPointStmt:
 			if (!superuser() || aurora)
 				ereport(ERROR,
 						(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
 						 errmsg("must be superuser to do CHECKPOINT")));

After making the code changes above, recompile. We are now close to a DEMO: the Aurora instance no longer updates the control file, writes data files, or performs checkpoints, which is what we want. When starting the read-only instance, add the parameter aurora=true to mark it as an Aurora instance:

pg_ctl start -o "-c log_directory=pg_log1922 -c port=1922 -c aurora=true"

Productization would still require attention to many more details; this is only a DEMO. Good luck to the Alibaba Cloud RDS team!

There is also a more conservative approach: a shared-storage multi-reader architecture that keeps two copies of the data. One copy is the primary instance's own storage, which it alone operates on and no other instance touches; the other copy belongs to a standby, and that one serves as the shared storage used by multiple read-only instances.

References

https://aws.amazon.com/cn/rds/aurora/

src/backend/access/transam/xlog.c

 /*
  * Open the WAL segment containing WAL position 'RecPtr'.
  *
  * The segment can be fetched via restore_command, or via walreceiver having
  * streamed the record, or it can already be present in pg_xlog. Checking
  * pg_xlog is mainly for crash recovery, but it will be polled in standby mode
  * too, in case someone copies a new segment directly to pg_xlog. That is not
  * documented or recommended, though.
  *
  * If 'fetching_ckpt' is true, we're fetching a checkpoint record, and should
  * prepare to read WAL starting from RedoStartLSN after this.
  *
  * 'RecPtr' might not point to the beginning of the record we're interested
  * in, it might also point to the page or segment header. In that case,
  * 'tliRecPtr' is the position of the WAL record we're interested in. It is
  * used to decide which timeline to stream the requested WAL from.
  *
  * If the record is not immediately available, the function returns false
  * if we're not in standby mode. In standby mode, waits for it to become
  * available.
  *
  * When the requested record becomes available, the function opens the file
  * containing it (if not open already), and returns true. When end of standby
  * mode is triggered by the user, and there is no more WAL available, returns
  * false.
  */
 static bool
 WaitForWALToBecomeAvailable(XLogRecPtr RecPtr, bool randAccess,
 							bool fetching_ckpt, XLogRecPtr tliRecPtr)
 {
   //////
 	static pg_time_t last_fail_time = 0;
 	pg_time_t	now;

 	/*-------
 	 * Standby mode is implemented by a state machine:
 	 *
 	 * 1. Read from either archive or pg_xlog (XLOG_FROM_ARCHIVE), or just
 	 *	  pg_xlog (XLOG_FROM_XLOG)
 	 * 2. Check trigger file
 	 * 3. Read from primary server via walreceiver (XLOG_FROM_STREAM)
 	 * 4. Rescan timelines
 	 * 5. Sleep 5 seconds, and loop back to 1.
 	 *
 	 * Failure to read from the current source advances the state machine to
 	 * the next state.
 	 *
 	 * 'currentSource' indicates the current state. There are no currentSource
 	 * values for "check trigger", "rescan timelines", and "sleep" states,
 	 * those actions are taken when reading from the previous source fails, as
 	 * part of advancing to the next state.
 	 *-------
 	 */