Friday, April 29, 2016

RDMA Example Program


This program was written by the engineer Roland Dreier for "Writing RDMA applications on Linux: Example programs". It dates from 2007, though, so copy-pasting the code runs into quite a few problems, and it also contains a couple of small bugs that look like accidental typos. I spent an afternoon patching it up and testing it. The result is a small example program, written in C, that adds two numbers together and uses RDMA for the transfer.
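
To give a feel for the verbs calls involved, below is a minimal, hedged sketch (this is NOT the code from the repository linked below) of one way a client could register a buffer holding the two operands and post it as a send work request. The pd and qp handles are assumed to have been created during connection setup, and post_operands.c is only an illustrative file name; it compiles as a standalone translation unit with: cc -c post_operands.c

/* post_operands.c -- hedged sketch, not the author's actual code: one way a
 * client could register a small buffer holding the two operands and post it
 * as a send work request.  `pd` and `qp` are assumed to have been created
 * during connection setup.  Compile check: cc -c post_operands.c */
#include <stdint.h>
#include <arpa/inet.h>
#include <infiniband/verbs.h>

int post_operands(struct ibv_pd *pd, struct ibv_qp *qp,
                  uint32_t val1, uint32_t val2)
{
    static uint32_t operands[2];        /* must stay valid until the send completes */
    operands[0] = htonl(val1);
    operands[1] = htonl(val2);

    /* Register the buffer so the HCA may access it. */
    struct ibv_mr *mr = ibv_reg_mr(pd, operands, sizeof operands,
                                   IBV_ACCESS_LOCAL_WRITE);
    if (!mr)
        return -1;

    struct ibv_sge sge = {
        .addr   = (uintptr_t) operands,
        .length = sizeof operands,
        .lkey   = mr->lkey,
    };
    struct ibv_send_wr wr = {
        .wr_id      = 1,                /* shows up later in ibv_wc.wr_id */
        .sg_list    = &sge,
        .num_sge    = 1,
        .opcode     = IBV_WR_SEND,
        .send_flags = IBV_SEND_SIGNALED,
    };
    struct ibv_send_wr *bad_wr = NULL;

    /* The completion (and ibv_dereg_mr) is handled elsewhere, via the CQ. */
    return ibv_post_send(qp, &wr, &bad_wr);
}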

As a side note, cc on Linux is simply gcc; you can confirm this by checking the symlink under /usr/bin.

user @ /usr/bin $ ls -l /usr/bin/cc

lrwxrwxrwx. 1 root root 3 2015-12-08 04:46 /usr/bin/cc -> gcc


Before compiling, make sure the librdmacm and libibverbs libraries are installed, and also confirm that your server's NIC supports RDMA.
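
A quick way to confirm that the stack actually sees an RDMA-capable NIC is to enumerate the verbs devices from C. The sketch below (list_devices.c is just an example name) prints every device libibverbs can find; an empty list usually means the hardware or the driver stack is missing. Build it with: cc -o list_devices list_devices.c -libverbs

/* list_devices.c -- list the RDMA devices visible to libibverbs. */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices = 0;
    int i;
    struct ibv_device **dev_list = ibv_get_device_list(&num_devices);

    if (!dev_list || num_devices == 0) {
        fprintf(stderr, "No RDMA-capable devices found\n");
        return 1;
    }

    for (i = 0; i < num_devices; i++)
        printf("RDMA device %d: %s\n", i, ibv_get_device_name(dev_list[i]));

    ibv_free_device_list(dev_list);
    return 0;
}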

Compile

client :

$ cc -o client client.c -lrdmacm -libverbs

server :

$ cc -o server server.c -lrdmacm 


Run
server :

$ ./server

client : (usage: client <servername> <val1> <val2>)

$ ./client 192.168.1.2 123 567

123 + 567 = 690



Source code:

https://github.com/linzion/RDMA-example-application


Reference:
http://www.digitalvampire.org/rdma-tutorial-2007/notes.pdf



struct ibv_wc is defined in <infiniband/verbs.h>:
struct ibv_wc {
        uint64_t                wr_id;          /* ID of the completed Work Request (WR) */
        enum ibv_wc_status      status;         /* Status of the operation */
        enum ibv_wc_opcode      opcode;         /* Operation type specified in the completed WR */
        uint32_t                vendor_err;     /* Vendor error syndrome */
        uint32_t                byte_len;       /* Number of bytes transferred */
        uint32_t                imm_data;       /* Immediate data (in network byte order) */
        uint32_t                qp_num;         /* Local QP number of completed WR */
        uint32_t                src_qp;         /* Source QP number (remote QP number) of completed WR (valid only for UD QPs) */
        int                     wc_flags;       /* Flags of the completed WR */
        uint16_t                pkey_index;     /* P_Key index (valid only for GSI QPs) */
        uint16_t                slid;           /* Source LID */
        uint8_t                 sl;             /* Service Level */
        uint8_t                 dlid_path_bits; /* DLID path bits (not applicable for multicast messages) */
};
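
For context, a minimal sketch of how such a completion is typically consumed: poll the CQ into an ibv_wc and check status, wr_id and byte_len. The cq handle is assumed to have been created earlier with ibv_create_cq(); poll_one.c is only an illustrative name and compiles as a standalone translation unit with: cc -c poll_one.c

/* poll_one.c -- busy-poll a CQ until one completion arrives and report it.
 * `cq` is assumed to have been created earlier with ibv_create_cq(). */
#include <stdio.h>
#include <infiniband/verbs.h>

int poll_one(struct ibv_cq *cq)
{
    struct ibv_wc wc;
    int n;

    do {
        n = ibv_poll_cq(cq, 1, &wc);   /* non-blocking; returns number of completions */
    } while (n == 0);

    if (n < 0) {
        fprintf(stderr, "ibv_poll_cq failed\n");
        return -1;
    }
    if (wc.status != IBV_WC_SUCCESS) {
        fprintf(stderr, "wr_id %llu failed: %s\n",
                (unsigned long long) wc.wr_id, ibv_wc_status_str(wc.status));
        return -1;
    }

    printf("wr_id %llu completed, %u bytes transferred\n",
           (unsigned long long) wc.wr_id, wc.byte_len);
    return 0;
}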

The C compiler sometimes reports stray-character errors such as:

error: stray '\342' in program
error: stray '\210' in program
error: stray '\222' in program

The cause is that the source code contains a full-width character, not merely an unusual symbol. In the example above, the three bytes \342 \210 \222 are the UTF-8 encoding of a single full-width dash; replacing it with the plain ASCII '-' makes the error go away.

To find where the offending character is:

od -c code.c > log.txt

Comparing the dump against the source makes it easy to spot which full-width character is causing the trouble.
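
If you would rather not eyeball the octal dump, a small helper program can report the line and column of every non-ASCII byte directly. This is only a convenience sketch (findwide.c is a made-up name), not part of the RDMA example. Build it with: cc -o findwide findwide.c

/* findwide.c -- report the line/column of every non-ASCII byte in a file,
 * so stray full-width characters are easy to locate. */
#include <stdio.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s file.c\n", argv[0]);
        return 1;
    }

    FILE *fp = fopen(argv[1], "rb");
    if (!fp) {
        perror("fopen");
        return 1;
    }

    int c, line = 1, col = 0;
    while ((c = fgetc(fp)) != EOF) {
        col++;
        if (c == '\n') { line++; col = 0; continue; }
        if (c & 0x80)                  /* byte outside 7-bit ASCII */
            printf("line %d, col %d: byte 0x%02x\n", line, col, c);
    }

    fclose(fp);
    return 0;
}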


Reference:
http://blog.csdn.net/sdustliyang/article/details/6851464

Wednesday, April 27, 2016

Linux Nvidia Driver Install


Before installing, check whether your machine has a single discrete GPU or is a laptop with both integrated graphics and a discrete GPU. The steps here apply only to machines with a single discrete GPU; otherwise, after installing and logging back in you will get a black screen and will not be able to reach the desktop (although you can still switch to another tty). If your machine is not a single-discrete-GPU one, refer to the separate post for dual-GPU setups instead.


1. First go to the official NVIDIA site, pick your GPU model, and download the .run installer.

     wget http://......                      

2. Before installing the .run file you must stop the X server; stop the service that matches your desktop environment:

  • GNOME:  (typically CentOS, and older Ubuntu releases)
          sudo service gdm stop
  • KDE:
          sudo service kdm stop
  • Unity:  (Ubuntu 14.04 and later)
          sudo service lightdm stop
  • Special case:  (start it again later with sudo start prefdm)
          sudo stop prefdm

3. Run the .run file and follow the installer's steps.

     sudo  sh  NVIDIA-Linux-x86_64-XXX.XX.run



Wednesday, April 20, 2016

Spark 1.6.1 Standalone Mode install: the simplest deployment

We prepare two Ubuntu 14.04 machines and set up Spark 1.6.1 in Standalone Mode on them; this is the simplest possible deployment. Assume the first machine's IP is 192.168.1.3 and the second machine's IP is 192.168.1.4.

1. Download Spark 1.6.1 from the official site. For package type choose "Pre-built for Hadoop 2.6 and later", click spark-1.6.1-bin-hadoop2.6.tgz to download it, then extract it.

$ wget http://apache.stu.edu.tw/spark/spark-1.6.1/spark-1.6.1-bin-hadoop2.6.tgz 
$ tar zxvf spark-1.6.1-bin-hadoop2.6.tgz

2. Install Java on every machine.


$ sudo apt-get install openjdk-7-jdk



3. On every machine, edit ~/.bashrc and append the Java paths at the end of the file.
$ vi ~/.bashrc


....
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
export PATH=$JAVA_HOME/bin:$PATH

To check that it worked:


$ java -version

4. On every machine, edit /etc/hosts:


$ vi /etc/hosts  


192.168.1.3     ubuntu1
192.168.1.4     ubuntu2

5. Set up passwordless SSH between the machines.
$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
$ scp -r ubuntu@192.168.1.3:~/.ssh ~
(run the scp on the second machine to copy the keys over from the first)


6. Turn off the SSH host-key confirmation prompt.
$ sudo vi /etc/ssh/ssh_config


 StrictHostKeyChecking no
This is to suppress the following prompt:
ECDSA key fingerprint is 7e:21:58:85:c1:bb:4b:20:c8:60:7f:89:7f:3e:8f:15.
Are you sure you want to continue connecting (yes/no)?

7. Configure spark-env.sh.
$ cd spark-1.6.1-bin-hadoop2.6/conf
$ mv spark-env.sh.template spark-env.sh
$ vi spark-env.sh


.....
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
.....
8. Configure slaves.
$ mv slaves.template slaves
$ vi slaves



#localhost
 ubuntu1
 ubuntu2


9. In the Spark directory, start the master and the slaves.
$ sbin/start-all.sh


10. Check that everything has started by opening the master's web UI:


http://192.168.1.3:8080/





11. Run a test job:



$./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master spark://ubuntu1:7077 \
  ./lib/spark-examples-1.6.1-hadoop2.6.0.jar \
  10


Below is output from an environment built with the same procedure (this particular cluster runs Spark 1.5.1):


 ✘ paslab@fastnet  ~/zino/spark-1.5.1-bin-hadoop2.6  ./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master spark://fastnet:7077 \
  --executor-memory 20G \
  --total-executor-cores 100 \
  ./lib/spark-examples-1.5.1-hadoop2.6.0.jar \
  10
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
16/04/22 15:57:06 INFO SparkContext: Running Spark version 1.5.1
16/04/22 15:57:06 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/04/22 15:57:06 INFO SecurityManager: Changing view acls to: paslab
16/04/22 15:57:06 INFO SecurityManager: Changing modify acls to: paslab
16/04/22 15:57:06 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(paslab); users with modify permissions: Set(paslab)
16/04/22 15:57:06 INFO Slf4jLogger: Slf4jLogger started
16/04/22 15:57:06 INFO Remoting: Starting remoting
16/04/22 15:57:06 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@192.168.1.2:45824]
16/04/22 15:57:06 INFO Utils: Successfully started service 'sparkDriver' on port 45824.
16/04/22 15:57:06 INFO SparkEnv: Registering MapOutputTracker
16/04/22 15:57:06 INFO SparkEnv: Registering BlockManagerMaster
16/04/22 15:57:06 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-654fc5a5-2c10-4cf9-90d0-aaa46dd6259d
16/04/22 15:57:07 INFO MemoryStore: MemoryStore started with capacity 530.3 MB
16/04/22 15:57:07 INFO HttpFileServer: HTTP File server directory is /tmp/spark-aea15ac1-2dfa-445a-aef4-1859becb1ee6/httpd-73c0986a-57f6-4b0b-9fc4-9f84366393c9
16/04/22 15:57:07 INFO HttpServer: Starting HTTP Server
16/04/22 15:57:07 INFO Utils: Successfully started service 'HTTP file server' on port 35012.
16/04/22 15:57:07 INFO SparkEnv: Registering OutputCommitCoordinator
16/04/22 15:57:07 INFO Utils: Successfully started service 'SparkUI' on port 4040.
16/04/22 15:57:07 INFO SparkUI: Started SparkUI at http://192.168.1.2:4040
16/04/22 15:57:07 INFO SparkContext: Added JAR file:/home/paslab/zino/spark-1.5.1-bin-hadoop2.6/./lib/spark-examples-1.5.1-hadoop2.6.0.jar at http://192.168.1.2:35012/jars/spark-examples-1.5.1-hadoop2.6.0.jar with timestamp 1461311827975
16/04/22 15:57:08 WARN MetricsSystem: Using default name DAGScheduler for source because spark.app.id is not set.
16/04/22 15:57:08 INFO AppClient$ClientEndpoint: Connecting to master spark://fastnet:7077...
16/04/22 15:57:08 INFO SparkDeploySchedulerBackend: Connected to Spark cluster with app ID app-20160422155708-0000
16/04/22 15:57:08 INFO AppClient$ClientEndpoint: Executor added: app-20160422155708-0000/0 on worker-20160422155620-192.168.1.2-43391 (192.168.1.2:43391) with 12 cores
16/04/22 15:57:08 INFO SparkDeploySchedulerBackend: Granted executor ID app-20160422155708-0000/0 on hostPort 192.168.1.2:43391 with 12 cores, 20.0 GB RAM
16/04/22 15:57:08 INFO AppClient$ClientEndpoint: Executor added: app-20160422155708-0000/1 on worker-20160422155623-192.168.1.3-47961 (192.168.1.3:47961) with 12 cores
16/04/22 15:57:08 INFO SparkDeploySchedulerBackend: Granted executor ID app-20160422155708-0000/1 on hostPort 192.168.1.3:47961 with 12 cores, 20.0 GB RAM
16/04/22 15:57:08 INFO AppClient$ClientEndpoint: Executor updated: app-20160422155708-0000/0 is now RUNNING
16/04/22 15:57:08 INFO AppClient$ClientEndpoint: Executor updated: app-20160422155708-0000/1 is now RUNNING
16/04/22 15:57:08 INFO AppClient$ClientEndpoint: Executor updated: app-20160422155708-0000/0 is now LOADING
16/04/22 15:57:08 INFO AppClient$ClientEndpoint: Executor updated: app-20160422155708-0000/1 is now LOADING
16/04/22 15:57:08 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 46316.
16/04/22 15:57:08 INFO NettyBlockTransferService: Server created on 46316
16/04/22 15:57:08 INFO BlockManagerMaster: Trying to register BlockManager
16/04/22 15:57:08 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.1.2:46316 with 530.3 MB RAM, BlockManagerId(driver, 192.168.1.2, 46316)
16/04/22 15:57:08 INFO BlockManagerMaster: Registered BlockManager
16/04/22 15:57:08 INFO SparkDeploySchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
16/04/22 15:57:08 INFO SparkContext: Starting job: reduce at SparkPi.scala:36
16/04/22 15:57:08 INFO DAGScheduler: Got job 0 (reduce at SparkPi.scala:36) with 10 output partitions
16/04/22 15:57:08 INFO DAGScheduler: Final stage: ResultStage 0(reduce at SparkPi.scala:36)
16/04/22 15:57:08 INFO DAGScheduler: Parents of final stage: List()
16/04/22 15:57:08 INFO DAGScheduler: Missing parents: List()
16/04/22 15:57:08 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[1] at map at SparkPi.scala:32), which has no missing parents
16/04/22 15:57:08 INFO MemoryStore: ensureFreeSpace(1888) called with curMem=0, maxMem=556038881
16/04/22 15:57:08 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 1888.0 B, free 530.3 MB)
16/04/22 15:57:08 INFO MemoryStore: ensureFreeSpace(1202) called with curMem=1888, maxMem=556038881
16/04/22 15:57:08 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 1202.0 B, free 530.3 MB)
16/04/22 15:57:08 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.1.2:46316 (size: 1202.0 B, free: 530.3 MB)
16/04/22 15:57:08 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:861
16/04/22 15:57:08 INFO DAGScheduler: Submitting 10 missing tasks from ResultStage 0 (MapPartitionsRDD[1] at map at SparkPi.scala:32)
16/04/22 15:57:08 INFO TaskSchedulerImpl: Adding task set 0.0 with 10 tasks
16/04/22 15:57:10 INFO SparkDeploySchedulerBackend: Registered executor: AkkaRpcEndpointRef(Actor[akka.tcp://sparkExecutor@192.168.1.2:50829/user/Executor#1430081976]) with ID 0
16/04/22 15:57:10 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, 192.168.1.2, PROCESS_LOCAL, 2161 bytes)
16/04/22 15:57:10 INFO TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, 192.168.1.2, PROCESS_LOCAL, 2161 bytes)
16/04/22 15:57:10 INFO TaskSetManager: Starting task 2.0 in stage 0.0 (TID 2, 192.168.1.2, PROCESS_LOCAL, 2161 bytes)
16/04/22 15:57:10 INFO TaskSetManager: Starting task 3.0 in stage 0.0 (TID 3, 192.168.1.2, PROCESS_LOCAL, 2161 bytes)
16/04/22 15:57:10 INFO TaskSetManager: Starting task 4.0 in stage 0.0 (TID 4, 192.168.1.2, PROCESS_LOCAL, 2161 bytes)
16/04/22 15:57:10 INFO TaskSetManager: Starting task 5.0 in stage 0.0 (TID 5, 192.168.1.2, PROCESS_LOCAL, 2161 bytes)
16/04/22 15:57:10 INFO TaskSetManager: Starting task 6.0 in stage 0.0 (TID 6, 192.168.1.2, PROCESS_LOCAL, 2161 bytes)
16/04/22 15:57:10 INFO TaskSetManager: Starting task 7.0 in stage 0.0 (TID 7, 192.168.1.2, PROCESS_LOCAL, 2161 bytes)
16/04/22 15:57:10 INFO TaskSetManager: Starting task 8.0 in stage 0.0 (TID 8, 192.168.1.2, PROCESS_LOCAL, 2161 bytes)
16/04/22 15:57:10 INFO TaskSetManager: Starting task 9.0 in stage 0.0 (TID 9, 192.168.1.2, PROCESS_LOCAL, 2161 bytes)
16/04/22 15:57:10 INFO SparkDeploySchedulerBackend: Registered executor: AkkaRpcEndpointRef(Actor[akka.tcp://sparkExecutor@192.168.1.3:45907/user/Executor#-348604070]) with ID 1
16/04/22 15:57:10 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.1.2:35234 with 10.4 GB RAM, BlockManagerId(0, 192.168.1.2, 35234)
16/04/22 15:57:10 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.1.3:36667 with 10.4 GB RAM, BlockManagerId(1, 192.168.1.3, 36667)
16/04/22 15:57:11 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.1.2:35234 (size: 1202.0 B, free: 10.4 GB)
16/04/22 15:57:11 INFO TaskSetManager: Finished task 8.0 in stage 0.0 (TID 8) in 1057 ms on 192.168.1.2 (1/10)
16/04/22 15:57:11 INFO TaskSetManager: Finished task 5.0 in stage 0.0 (TID 5) in 1060 ms on 192.168.1.2 (2/10)
16/04/22 15:57:11 INFO TaskSetManager: Finished task 3.0 in stage 0.0 (TID 3) in 1120 ms on 192.168.1.2 (3/10)
16/04/22 15:57:11 INFO TaskSetManager: Finished task 2.0 in stage 0.0 (TID 2) in 1124 ms on 192.168.1.2 (4/10)
16/04/22 15:57:11 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 1161 ms on 192.168.1.2 (5/10)
16/04/22 15:57:11 INFO TaskSetManager: Finished task 1.0 in stage 0.0 (TID 1) in 1146 ms on 192.168.1.2 (6/10)
16/04/22 15:57:11 INFO TaskSetManager: Finished task 9.0 in stage 0.0 (TID 9) in 1142 ms on 192.168.1.2 (7/10)
16/04/22 15:57:11 INFO TaskSetManager: Finished task 4.0 in stage 0.0 (TID 4) in 1149 ms on 192.168.1.2 (8/10)
16/04/22 15:57:11 INFO TaskSetManager: Finished task 7.0 in stage 0.0 (TID 7) in 1148 ms on 192.168.1.2 (9/10)
16/04/22 15:57:11 INFO TaskSetManager: Finished task 6.0 in stage 0.0 (TID 6) in 1149 ms on 192.168.1.2 (10/10)
16/04/22 15:57:11 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
16/04/22 15:57:11 INFO DAGScheduler: ResultStage 0 (reduce at SparkPi.scala:36) finished in 2.419 s
16/04/22 15:57:11 INFO DAGScheduler: Job 0 finished: reduce at SparkPi.scala:36, took 2.622159 s
Pi is roughly 3.144648
16/04/22 15:57:11 INFO SparkUI: Stopped Spark web UI at http://192.168.1.2:4040
16/04/22 15:57:11 INFO DAGScheduler: Stopping DAGScheduler
16/04/22 15:57:11 INFO SparkDeploySchedulerBackend: Shutting down all executors
16/04/22 15:57:11 INFO SparkDeploySchedulerBackend: Asking each executor to shut down
16/04/22 15:57:11 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
16/04/22 15:57:11 INFO MemoryStore: MemoryStore cleared
16/04/22 15:57:11 INFO BlockManager: BlockManager stopped
16/04/22 15:57:11 INFO BlockManagerMaster: BlockManagerMaster stopped
16/04/22 15:57:11 INFO SparkContext: Successfully stopped SparkContext
16/04/22 15:57:11 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
16/04/22 15:57:11 INFO ShutdownHookManager: Shutdown hook called
16/04/22 15:57:11 INFO ShutdownHookManager: Deleting directory /tmp/spark-aea15ac1-2dfa-445a-aef4-1859becb1ee6


Sunday, April 17, 2016

ASUS K43SD laptop overheating: solved


I have had this K43SD laptop for three and a half years, and it would overheat even just opening Chrome: the fan spun flat out but the CPU temperature still hit 99°C, and sometimes it got hot enough to shut itself down. I tried cleaning the exposed part of the fan (the heatsink fins were covered by a film, so I did not dare touch them), and even took the copper heatsink off and re-applied thermal paste. The fresh paste did help, bringing the initial average temperature down from about 70°C to 50°C, but as soon as I opened Chrome and scrolled Facebook it shot back up to 100°C, even after buying a cooling pad that cost over NT$1000. In the end, out of frustration, I bought a replacement fan plus copper heatsink assembly (remember to apply thermal paste when swapping it), and only when I peeled the film off the fan and fins did I discover the fins were completely clogged with dust!

^ The heatsink fins (I'm honestly not sure of the proper name) had a thick layer of dust packed behind them.

^ With no software running, the normal temperature should sit between about 44°C and 60°C; anything higher means it really needs cleaning!


In summary, what affects laptop temperature the most: keep the fan and heatsink fins free of dust, and re-apply the thermal paste when it gets old. That basically solves it.

To tally it up: roughly NT$150 for thermal paste, NT$350 for the fan and heatsink, and at least 3 hours of careful disassembly (going slowly for fear of breaking something). It really is not a great deal, so I would suggest not doing it yourself; spending about NT$500 to have the service center clean it is much better value. Also... a cooling pad simply cannot fix the root cause!! Unless you are running heavy games, in which case it can at least keep you at 97°C instead of 100°C.


One more warning for the K43SD: when separating the keyboard side of the case, there are two tiny screws hidden under the optical drive!! They must be removed first, or forcing the case apart will definitely break it.

Tuesday, April 12, 2016

ASUS Zenbook, Arch GNOME: multitouch gestures using touchegg


Because multi-finger gestures are intercepted by GNOME [Note 1], the settings in /etc/X11/xorg.conf.d/50-synaptics.conf also stop taking effect. From reading the touchegg source code, it talks to the underlying driver directly, so this approach is much better than stripping out GNOME's touchpad handling and recompiling it, e.g. the method described here.

1. Install  elantech-asustouchpad-dkms 4.0.2-1


2. Install xf86-input-synaptics and, from the AUR, touchegg and touchegg-gce-git.


    touchegg-gce-git is the GUI configuration tool for touchegg.

3. Turn off the touchpad: Settings -> Mouse & Touchpad -> Touchpad Off.





4. Create /etc/X11/xorg.conf.d/50-synaptics.conf:


Section "InputClass"
        Identifier "touchpad catchall"
        Driver "synaptics"
        MatchIsTouchpad "on"
        Option "TapButton1" "1"
        Option "TapButton2" "0"
        Option "TapButton3" "0"
        Option "ClickFinger2" "0"
        Option "ClickFinger3" "0"

# This option is recommend on all Linux systems using evdev, but cannot be
# enabled by default. See the following link for details:
# http://who-t.blogspot.com/2010/11/how-to-ignore-configuration-errors.html
        MatchDevicePath "/dev/input/event*"


EndSection
Per-model driver settings can be found here, although the page is a little dated by now:
https://code.google.com/archive/p/touchegg/wikis/ConfigureDevices.wiki

Option reference: https://wiki.archlinux.org/index.php/Touchpad_Synaptics

5. Start touchegg before the desktop session begins:

sudo vi /etc/profile.d/touchegg.sh


#!/bin/bash
touchegg > /dev/null 2>&1 &

6. Reboot.




Some of this content is adapted from "Multitouch gestures with the Dell XPS 13 on Arch Linux".
[Note 1]

xorg.conf.d/50-synaptics.conf does not seem to apply under GNOME and MATE

GNOME and MATE, by default, will overwrite various options for your touch-pad. This includes configurable features for which there is no graphical configuration within GNOME's system control panel. This may cause it to appear that /etc/X11/xorg.conf.d/50-synaptics.conf is not applied. Please refer to the GNOME section in this article to prevent this behavior.


Possibly related:


Console tools

  • Synclient (Recommended) — command line utility to configure and query Synaptics driver settings on a live system, the tool is developed by the synaptics driver maintainers and is provided with the synaptics driver
http://xorg.freedesktop.org/ || xf86-input-synaptics
  • xinput — small general-purpose CLI tool to configure devices
http://xorg.freedesktop.org/ || xorg-xinput

Graphical tools

  • GPointing Device Settings — provides graphical on the fly configuration for several pointing devices connected to the system, including your synaptics touch pad. This application replaces GSynaptics as the preferred tool for graphical touchpad configuration through the synaptics driver
https://wiki.gnome.org/Attic/GPointingDeviceSettings || gpointing-device-settings



References:
Package Details: elantech-asustouchpad-dkms:https://aur.archlinux.org/packages/elantech-asustouchpad-dkms/
arch wiki Touchpad Synaptics:https://wiki.archlinux.org/index.php/Touchpad_Synaptics
arch wiki Touchegg:https://wiki.archlinux.org/index.php/Touchegg
Multitouch gestures with the Dell XPS 13 on Arch Linux:https://hroy.eu/tips/dell-xps-13-touchpad/

Monday, April 11, 2016

Ubuntu GNOME 14.04: touchpad gestures missing


After installing Ubuntu GNOME 14.04 I found that no touchpad gesture with two or more fingers worked.
From what is reported online, ASUS Zenbooks running Ubuntu 14.04 with kernels earlier than 3.19 can all hit this problem; the models known so far are the UX303LN and the UX303UB.
A fix has already been published by Pilot6:

sudo add-apt-repository ppa:hanipouspilot/focaltech-dkms
sudo apt-get update
sudo apt-get install focaltech-dkms


https://launchpad.net/~hanipouspilot/+archive/ubuntu/focaltech-dkms
http://ubuntuforums.org/showthread.php?t=2253069&page=10
http://askubuntu.com/questions/609228/asus-x750ja-and-ubuntu-gnome-14-04

Sunday, April 10, 2016

Ubuntu 14.04 NVIDIA driver install (ASUS UX303UB): fixing the black screen after reboot


The test environment is an ASUS UX303UB with a GeForce 940M. After installing nvidia-352 with Ubuntu's built-in Additional Drivers tool, logins failed or the screen went black. NVIDIA's site tells you to download the 364 .run driver instead, but after installing that the screen is still black and the intel i915 error "mismatch in base.adjusted_mode.crtc_clock" keeps popping up. Checking Ubuntu's packages, the last released driver version compatible with the GT 940M is 352. Following the article in the reference below solves the problem, and it also applies to Ubuntu GNOME.

Bumblebee is an open-source project that brings NVIDIA Optimus support to Linux. On laptops it automatically switches between the NVIDIA GPU and the integrated Intel GPU to save power, and in doing so it also resolves the dual-GPU conflict.
 


I don't own the rights to the source article.
No copyright infringement intended.


Reference: http://askubuntu.com/questions/689724/bumblebee-on-asus-zenbook-ux303lb