
Monday, February 26, 2018

[python3] ubuntu pymongo RuntimeError: 'list' must be None or a list, not <class 'str'>

RuntimeError: 'list' must be None or a list, not <class 'str'>

Error


from pymongo import MongoClient
Traceback (most recent call last):

  File "<ipython-input-1-214d66698402>", line 1, in <module>
    from pymongo import MongoClient

  File "/usr/local/lib/python3.5/dist-packages/pymongo/__init__.py", line 77, in <module>
    from pymongo.collection import ReturnDocument

  File "/usr/local/lib/python3.5/dist-packages/pymongo/collection.py", line 29, in <module>
    from pymongo import (common,

  File "/usr/local/lib/python3.5/dist-packages/pymongo/common.py", line 28, in <module>
    from pymongo.auth import MECHANISMS

  File "/usr/local/lib/python3.5/dist-packages/pymongo/auth.py", line 120, in <module>
    from backports.pbkdf2 import pbkdf2_hmac

  File "<frozen importlib._bootstrap>", line 969, in _find_and_load

  File "<frozen importlib._bootstrap>", line 954, in _find_and_load_unlocked

  File "<frozen importlib._bootstrap>", line 892, in _find_spec

  File "<frozen importlib._bootstrap>", line 873, in _find_spec_legacy

  File "/usr/local/lib/python3.5/dist-packages/pyximport/pyximport.py", line 253, in find_module
    fp, pathname, (ext,mode,ty) = imp.find_module(fullname,package_path)

  File "/usr/lib/python3.5/imp.py", line 270, in find_module
    "not {}".format(type(name)))

RuntimeError: 'list' must be None or a list, not <class 'str'>


Solution

The package backports.pbkdf2 is a dependency of pymongo; the official website also documents this. Installing backports.pbkdf2 fixes the error:

sudo pip3 install backports.pbkdf2
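
To confirm the fix, retry the import that originally failed; a quick one-liner (assuming python3 is on your PATH):

$ python3 -c "from pymongo import MongoClient; print('pymongo import OK')"

If the dependency installed correctly, this prints the message instead of the RuntimeError above.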

Wednesday, August 24, 2016

Ubuntu 16.04 compile latest Kernel

Compiling and installing the latest kernel on Ubuntu 16.04

Compiling and installing a kernel on Ubuntu is fairly straightforward; the packaging even sets up GRUB for you.

1. Fetch the latest kernel source (4.8.0-rc3+ at the time of writing)

$ git clone git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git


2. Move the source tree to /usr/src

$ sudo mv linux-2.6/ /usr/src/


3. Install the packages needed for the build

$ sudo apt-get install build-essential kernel-package libncurses5-dev libc6-dev libssl-dev

libncurses5-dev is what makes the menuconfig target work
libssl-dev is required at build time


4. Switch to root and change into /usr/src/linux-2.6/

$ sudo su -
$ cd /usr/src/linux-2.6/


5. Copy an existing, known-working .config

$ cp /boot/config-4.4.0-34-generic ./.config

The file name /boot/config-4.4.0-34-generic will differ depending on the kernel you are running.


6. Configure with menuconfig

$ make menuconfig

Load the .config you just copied, adjust the options to your liking, then save.
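
Because the copied .config comes from an older kernel, the build will otherwise prompt for every newly added option. If you would rather accept the defaults non-interactively, olddefconfig is the standard make target for exactly that:

$ make olddefconfig   # reuse the old .config, taking defaults for any new options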


7. Build

$ fakeroot make-kpkg -j 4 --initrd kernel_image kernel_headers


8. Install

$ dpkg -i linux-image-4.8.0-rc3+_4.8.0-rc3+-10.00.Custom_amd64.deb
$ dpkg -i linux-headers-4.8.0-rc3+_4.8.0-rc3+-10.00.Custom_amd64.deb

If you're worried that GRUB wasn't updated for you, run update-grub:
$ update-grub

Note: /boot needs enough free disk space for the install; at least 256 MB is the safe bet!
If Windows is installed on the same machine, watch out for the location of /boot/efi!
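
A quick way to check the available space before installing (df is standard):

$ df -h /boot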


9. Reboot

$ reboot

The default first Ubuntu entry should now boot the freshly installed kernel. If it doesn't, open GRUB's advanced options and pick the kernel to boot from there; if the build failed, the advanced options also let you fall back to the previously working kernel.

If it boots normally, congratulations!
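
Once logged in, confirm which kernel is actually running:

$ uname -r   # should print 4.8.0-rc3+ for this build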



Thursday, August 18, 2016

Ubuntu 14.04 LTS install VNC server with GNOME desktop (remote desktop)

Ubuntu 14.04 LTS install VNC server with GNOME desktop (remote desktop)

1. Install the GNOME-related packages

$ sudo apt-get install gnome-panel gnome-settings-daemon metacity nautilus gnome-terminal vnc4server

2. Start it once to create the default files in your home directory

user@ubuntu:~$ vncserver

You will require a password to access your desktops.

Password:
Verify:
xauth:  file /home/user/.Xauthority does not exist

New 'ubuntu:1 (user)' desktop is ubuntu:1

Creating default startup script /home/user/.vnc/xstartup
Starting applications specified in /home/user/.vnc/xstartup
Log file is /home/user/.vnc/ubuntu:1.log


3. Edit /home/user/.vnc/xstartup and apply the following settings

#!/bin/sh

export XKL_XMODMAP_DISABLE=1
unset SESSION_MANAGER
unset DBUS_SESSION_BUS_ADDRESS

[ -x /etc/vnc/xstartup ] && exec /etc/vnc/xstartup
[ -r $HOME/.Xresources ] && xrdb $HOME/.Xresources
xsetroot -solid grey
vncconfig -iconic &

gnome-panel &
gnome-settings-daemon &
metacity &
nautilus &
gnome-terminal &

4. Restart the server
user@ubuntu:~$ vncserver -kill :1
Killing Xvnc4 process ID 45058
user@ubuntu:~$ vncserver 

New 'ubuntu:1 (user)' desktop is ubuntu:1

Starting applications specified in /home/user/.vnc/xstartup
Log file is /home/user/.vnc/ubuntu:1.log
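
Display :1 corresponds to TCP port 5901. A quick way to check that Xvnc is actually listening (net-tools is installed by default on 14.04):

$ netstat -tln | grep 5901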


5. On another machine, launch the Remmina remote desktop client




6. Add a new connection and enter the IP:port, username, and password




7. Connection successful!



Source: http://askubuntu.com/questions/475023/how-to-make-vnc-server-work-with-ubuntu-desktop-without-xfce

Reference: https://docs.microsoft.com/zh-tw/azure/virtual-machines/linux/classic/remote-desktop


https://www.digitalocean.com/community/tutorials/how-to-set-up-vnc-server-on-debian-8

https://www.digitalocean.com/community/tutorials/how-to-install-and-configure-vnc-on-ubuntu-14-04


Related configuration file: /etc/vnc.conf

The -localhost option can also be made permanent by editing the wrapper script:

$ vi /etc/alternatives/vncserver



Listen on localhost only

$ vncserver -localhost

SSH port forward
$ ssh -L 5901:127.0.0.1:5901 -N -f -l user server_ip_address
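
With the tunnel up, the VNC client on the local machine connects to localhost:5901 instead of the server's address. You can check that the local end of the tunnel is listening:

$ ss -tln | grep 5901   # the forwarded port should appear on 127.0.0.1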

Removing the SSH forward
On Linux you can find the forwarding process with:
ps aux | grep ssh
and then use
kill <id>
to kill the process.
If the kill command is not successful, you can try
kill -9 <id>

Source: https://superuser.com/questions/87014/how-do-i-remove-an-ssh-forwarded-port


Stop the listening display

$ vncserver -kill :2

Wednesday, April 27, 2016

Linux Nvidia Driver Install

Linux Nvidia Driver Install

Before installing, check whether your machine has a discrete graphics card, or whether it is a laptop with both integrated and discrete GPUs. The steps here apply only to machines with a single discrete card; otherwise you will hit a black screen after installing and logging back in, unable to reach the desktop (although you can still switch to another tty). If you don't have a single discrete card, refer to this article instead.


1. Go to the official website, select your graphics card model, and download the .run file.

     wget http://......                      

2. Before installing the .run file, shut down the X server first; the service to stop depends on your desktop environment.

  • GNOME: (usually CentOS, and older Ubuntu releases)
          sudo service gdm stop
  • KDE:
          sudo service kdm stop
  • UNITY: (Ubuntu 14.04 and later)
          sudo service lightdm stop
  • Special: (restart it with sudo start prefdm)
          sudo stop prefdm

3. Run the .run file and follow the steps.

$ sudo sh NVIDIA-Linux-x86_64-XXX.XX.run
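
Once the installer finishes, restart the display manager you stopped (or just reboot) and verify the driver loaded; nvidia-smi is installed together with the driver:

$ sudo service lightdm start   # or gdm/kdm, whichever was stopped in step 2
$ nvidia-smi                   # should list the GPU and the driver version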



Wednesday, April 20, 2016

spark 1.6.1 Standalone Mode install: simplest deployment


spark 1.6.1  Standalone Mode install 

We prepare two Ubuntu 14.04 machines and set up Spark 1.6.1 in standalone mode on them, the simplest possible deployment. Assume the first machine's IP is 192.168.1.3 and the second's is 192.168.1.4.

1. Download Spark 1.6.1 from the official site. For the package type choose: Pre-built for Hadoop 2.6 and later, click spark-1.6.1-bin-hadoop2.6.tgz to download, then extract it.

$ wget http://apache.stu.edu.tw/spark/spark-1.6.1/spark-1.6.1-bin-hadoop2.6.tgz 
$ tar zxvf spark-1.6.1-bin-hadoop2.6.tgz

2. Install Java on every machine


$ sudo apt-get install openjdk-7-jdk



3. On every machine, edit ~/.bashrc and append the Java paths at the end of the file.

$ vi ~/.bashrc


....
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
export PATH=$JAVA_HOME/bin:$PATH

Check that it works:


$ java -version

4. On every machine, edit /etc/hosts


$ vi /etc/hosts  


192.168.1.3     ubuntu1
192.168.1.4     ubuntu2

5. Set up passwordless SSH (run the scp on the second machine to copy the keys over from the first)
$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
$ scp -r ubuntu@192.168.1.3:~/.ssh ~
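
To confirm the keys work, SSH between the machines should now succeed without a password prompt (ubuntu2 is the hostname added to /etc/hosts in step 4):

$ ssh ubuntu2 hostname   # should print ubuntu2 without asking for a password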


6. Turn off the host-key confirmation prompt for SSH connections.
$ sudo vi /etc/ssh/ssh_config


 StrictHostKeyChecking no
This is to suppress the following prompt:
ECDSA key fingerprint is 7e:21:58:85:c1:bb:4b:20:c8:60:7f:89:7f:3e:8f:15.
Are you sure you want to continue connecting (yes/no)?

7. Configure spark-env.sh
$ cd ~/spark-1.6.1-bin-hadoop2.6/conf
$ mv spark-env.sh.template spark-env.sh
$ vi spark-env.sh


.....
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
.....
8. Configure slaves
$ mv slaves.template slaves
$ vi slaves



#localhost
ubuntu1
ubuntu2


9. From the Spark directory, start the master and the slaves
$ sbin/start-all.sh
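
As a quick sanity check, jps (which ships with the JDK) should show a Master process on this machine and a Worker process on every slave:

$ jps   # expect Master here, plus a Worker on each node listed in slaves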


10. Check that everything has started.


http://192.168.1.3:8080/





11. Run the test program



$ ./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master spark://ubuntu1:7077 \
  ./lib/spark-examples-1.6.1-hadoop2.6.0.jar \
  10


Below is output from an environment built with the same procedure (here running Spark 1.5.1 on a different cluster):


 ✘ paslab@fastnet  ~/zino/spark-1.5.1-bin-hadoop2.6  ./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master spark://fastnet:7077 \
  --executor-memory 20G \
  --total-executor-cores 100 \
  ./lib/spark-examples-1.5.1-hadoop2.6.0.jar \
  10
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
16/04/22 15:57:06 INFO SparkContext: Running Spark version 1.5.1
16/04/22 15:57:06 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/04/22 15:57:06 INFO SecurityManager: Changing view acls to: paslab
16/04/22 15:57:06 INFO SecurityManager: Changing modify acls to: paslab
16/04/22 15:57:06 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(paslab); users with modify permissions: Set(paslab)
16/04/22 15:57:06 INFO Slf4jLogger: Slf4jLogger started
16/04/22 15:57:06 INFO Remoting: Starting remoting
16/04/22 15:57:06 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@192.168.1.2:45824]
16/04/22 15:57:06 INFO Utils: Successfully started service 'sparkDriver' on port 45824.
16/04/22 15:57:06 INFO SparkEnv: Registering MapOutputTracker
16/04/22 15:57:06 INFO SparkEnv: Registering BlockManagerMaster
16/04/22 15:57:06 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-654fc5a5-2c10-4cf9-90d0-aaa46dd6259d
16/04/22 15:57:07 INFO MemoryStore: MemoryStore started with capacity 530.3 MB
16/04/22 15:57:07 INFO HttpFileServer: HTTP File server directory is /tmp/spark-aea15ac1-2dfa-445a-aef4-1859becb1ee6/httpd-73c0986a-57f6-4b0b-9fc4-9f84366393c9
16/04/22 15:57:07 INFO HttpServer: Starting HTTP Server
16/04/22 15:57:07 INFO Utils: Successfully started service 'HTTP file server' on port 35012.
16/04/22 15:57:07 INFO SparkEnv: Registering OutputCommitCoordinator
16/04/22 15:57:07 INFO Utils: Successfully started service 'SparkUI' on port 4040.
16/04/22 15:57:07 INFO SparkUI: Started SparkUI at http://192.168.1.2:4040
16/04/22 15:57:07 INFO SparkContext: Added JAR file:/home/paslab/zino/spark-1.5.1-bin-hadoop2.6/./lib/spark-examples-1.5.1-hadoop2.6.0.jar at http://192.168.1.2:35012/jars/spark-examples-1.5.1-hadoop2.6.0.jar with timestamp 1461311827975
16/04/22 15:57:08 WARN MetricsSystem: Using default name DAGScheduler for source because spark.app.id is not set.
16/04/22 15:57:08 INFO AppClient$ClientEndpoint: Connecting to master spark://fastnet:7077...
16/04/22 15:57:08 INFO SparkDeploySchedulerBackend: Connected to Spark cluster with app ID app-20160422155708-0000
16/04/22 15:57:08 INFO AppClient$ClientEndpoint: Executor added: app-20160422155708-0000/0 on worker-20160422155620-192.168.1.2-43391 (192.168.1.2:43391) with 12 cores
16/04/22 15:57:08 INFO SparkDeploySchedulerBackend: Granted executor ID app-20160422155708-0000/0 on hostPort 192.168.1.2:43391 with 12 cores, 20.0 GB RAM
16/04/22 15:57:08 INFO AppClient$ClientEndpoint: Executor added: app-20160422155708-0000/1 on worker-20160422155623-192.168.1.3-47961 (192.168.1.3:47961) with 12 cores
16/04/22 15:57:08 INFO SparkDeploySchedulerBackend: Granted executor ID app-20160422155708-0000/1 on hostPort 192.168.1.3:47961 with 12 cores, 20.0 GB RAM
16/04/22 15:57:08 INFO AppClient$ClientEndpoint: Executor updated: app-20160422155708-0000/0 is now RUNNING
16/04/22 15:57:08 INFO AppClient$ClientEndpoint: Executor updated: app-20160422155708-0000/1 is now RUNNING
16/04/22 15:57:08 INFO AppClient$ClientEndpoint: Executor updated: app-20160422155708-0000/0 is now LOADING
16/04/22 15:57:08 INFO AppClient$ClientEndpoint: Executor updated: app-20160422155708-0000/1 is now LOADING
16/04/22 15:57:08 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 46316.
16/04/22 15:57:08 INFO NettyBlockTransferService: Server created on 46316
16/04/22 15:57:08 INFO BlockManagerMaster: Trying to register BlockManager
16/04/22 15:57:08 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.1.2:46316 with 530.3 MB RAM, BlockManagerId(driver, 192.168.1.2, 46316)
16/04/22 15:57:08 INFO BlockManagerMaster: Registered BlockManager
16/04/22 15:57:08 INFO SparkDeploySchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
16/04/22 15:57:08 INFO SparkContext: Starting job: reduce at SparkPi.scala:36
16/04/22 15:57:08 INFO DAGScheduler: Got job 0 (reduce at SparkPi.scala:36) with 10 output partitions
16/04/22 15:57:08 INFO DAGScheduler: Final stage: ResultStage 0(reduce at SparkPi.scala:36)
16/04/22 15:57:08 INFO DAGScheduler: Parents of final stage: List()
16/04/22 15:57:08 INFO DAGScheduler: Missing parents: List()
16/04/22 15:57:08 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[1] at map at SparkPi.scala:32), which has no missing parents
16/04/22 15:57:08 INFO MemoryStore: ensureFreeSpace(1888) called with curMem=0, maxMem=556038881
16/04/22 15:57:08 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 1888.0 B, free 530.3 MB)
16/04/22 15:57:08 INFO MemoryStore: ensureFreeSpace(1202) called with curMem=1888, maxMem=556038881
16/04/22 15:57:08 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 1202.0 B, free 530.3 MB)
16/04/22 15:57:08 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.1.2:46316 (size: 1202.0 B, free: 530.3 MB)
16/04/22 15:57:08 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:861
16/04/22 15:57:08 INFO DAGScheduler: Submitting 10 missing tasks from ResultStage 0 (MapPartitionsRDD[1] at map at SparkPi.scala:32)
16/04/22 15:57:08 INFO TaskSchedulerImpl: Adding task set 0.0 with 10 tasks
16/04/22 15:57:10 INFO SparkDeploySchedulerBackend: Registered executor: AkkaRpcEndpointRef(Actor[akka.tcp://sparkExecutor@192.168.1.2:50829/user/Executor#1430081976]) with ID 0
16/04/22 15:57:10 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, 192.168.1.2, PROCESS_LOCAL, 2161 bytes)
16/04/22 15:57:10 INFO TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, 192.168.1.2, PROCESS_LOCAL, 2161 bytes)
16/04/22 15:57:10 INFO TaskSetManager: Starting task 2.0 in stage 0.0 (TID 2, 192.168.1.2, PROCESS_LOCAL, 2161 bytes)
16/04/22 15:57:10 INFO TaskSetManager: Starting task 3.0 in stage 0.0 (TID 3, 192.168.1.2, PROCESS_LOCAL, 2161 bytes)
16/04/22 15:57:10 INFO TaskSetManager: Starting task 4.0 in stage 0.0 (TID 4, 192.168.1.2, PROCESS_LOCAL, 2161 bytes)
16/04/22 15:57:10 INFO TaskSetManager: Starting task 5.0 in stage 0.0 (TID 5, 192.168.1.2, PROCESS_LOCAL, 2161 bytes)
16/04/22 15:57:10 INFO TaskSetManager: Starting task 6.0 in stage 0.0 (TID 6, 192.168.1.2, PROCESS_LOCAL, 2161 bytes)
16/04/22 15:57:10 INFO TaskSetManager: Starting task 7.0 in stage 0.0 (TID 7, 192.168.1.2, PROCESS_LOCAL, 2161 bytes)
16/04/22 15:57:10 INFO TaskSetManager: Starting task 8.0 in stage 0.0 (TID 8, 192.168.1.2, PROCESS_LOCAL, 2161 bytes)
16/04/22 15:57:10 INFO TaskSetManager: Starting task 9.0 in stage 0.0 (TID 9, 192.168.1.2, PROCESS_LOCAL, 2161 bytes)
16/04/22 15:57:10 INFO SparkDeploySchedulerBackend: Registered executor: AkkaRpcEndpointRef(Actor[akka.tcp://sparkExecutor@192.168.1.3:45907/user/Executor#-348604070]) with ID 1
16/04/22 15:57:10 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.1.2:35234 with 10.4 GB RAM, BlockManagerId(0, 192.168.1.2, 35234)
16/04/22 15:57:10 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.1.3:36667 with 10.4 GB RAM, BlockManagerId(1, 192.168.1.3, 36667)
16/04/22 15:57:11 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.1.2:35234 (size: 1202.0 B, free: 10.4 GB)
16/04/22 15:57:11 INFO TaskSetManager: Finished task 8.0 in stage 0.0 (TID 8) in 1057 ms on 192.168.1.2 (1/10)
16/04/22 15:57:11 INFO TaskSetManager: Finished task 5.0 in stage 0.0 (TID 5) in 1060 ms on 192.168.1.2 (2/10)
16/04/22 15:57:11 INFO TaskSetManager: Finished task 3.0 in stage 0.0 (TID 3) in 1120 ms on 192.168.1.2 (3/10)
16/04/22 15:57:11 INFO TaskSetManager: Finished task 2.0 in stage 0.0 (TID 2) in 1124 ms on 192.168.1.2 (4/10)
16/04/22 15:57:11 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 1161 ms on 192.168.1.2 (5/10)
16/04/22 15:57:11 INFO TaskSetManager: Finished task 1.0 in stage 0.0 (TID 1) in 1146 ms on 192.168.1.2 (6/10)
16/04/22 15:57:11 INFO TaskSetManager: Finished task 9.0 in stage 0.0 (TID 9) in 1142 ms on 192.168.1.2 (7/10)
16/04/22 15:57:11 INFO TaskSetManager: Finished task 4.0 in stage 0.0 (TID 4) in 1149 ms on 192.168.1.2 (8/10)
16/04/22 15:57:11 INFO TaskSetManager: Finished task 7.0 in stage 0.0 (TID 7) in 1148 ms on 192.168.1.2 (9/10)
16/04/22 15:57:11 INFO TaskSetManager: Finished task 6.0 in stage 0.0 (TID 6) in 1149 ms on 192.168.1.2 (10/10)
16/04/22 15:57:11 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
16/04/22 15:57:11 INFO DAGScheduler: ResultStage 0 (reduce at SparkPi.scala:36) finished in 2.419 s
16/04/22 15:57:11 INFO DAGScheduler: Job 0 finished: reduce at SparkPi.scala:36, took 2.622159 s
Pi is roughly 3.144648
16/04/22 15:57:11 INFO SparkUI: Stopped Spark web UI at http://192.168.1.2:4040
16/04/22 15:57:11 INFO DAGScheduler: Stopping DAGScheduler
16/04/22 15:57:11 INFO SparkDeploySchedulerBackend: Shutting down all executors
16/04/22 15:57:11 INFO SparkDeploySchedulerBackend: Asking each executor to shut down
16/04/22 15:57:11 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
16/04/22 15:57:11 INFO MemoryStore: MemoryStore cleared
16/04/22 15:57:11 INFO BlockManager: BlockManager stopped
16/04/22 15:57:11 INFO BlockManagerMaster: BlockManagerMaster stopped
16/04/22 15:57:11 INFO SparkContext: Successfully stopped SparkContext
16/04/22 15:57:11 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
16/04/22 15:57:11 INFO ShutdownHookManager: Shutdown hook called
16/04/22 15:57:11 INFO ShutdownHookManager: Deleting directory /tmp/spark-aea15ac1-2dfa-445a-aef4-1859becb1ee6


Monday, April 11, 2016

Ubuntu GNOME 14.04 touchpad settings missing

Ubuntu GNOME 14.04 touchpad settings missing


After installing Ubuntu GNOME 14.04, I found that multi-finger touchpad gestures (two fingers and up) did not work at all.
ASUS Zenbooks running Ubuntu 14.04 with kernels before 3.19 all seem prone to this problem; the models known online so far are the UX303LN and the UX303UB.
A fix for this has already been published by Pilot6:

sudo add-apt-repository ppa:hanipouspilot/focaltech-dkms
sudo apt-get update
sudo apt-get install focaltech-dkms
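
After the install, the DKMS module should be built and registered against the running kernel; a quick check (the module name here is assumed from the package name and may differ slightly):

$ dkms status | grep -i focaltech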


https://launchpad.net/~hanipouspilot/+archive/ubuntu/focaltech-dkms
http://ubuntuforums.org/showthread.php?t=2253069&page=10
http://askubuntu.com/questions/609228/asus-x750ja-and-ubuntu-gnome-14-04

Sunday, April 10, 2016

Ubuntu 14.04 NVIDIA driver install (ASUS UX303UB): fixing the black screen after reboot

ubuntu 14.04 nvidia driver install


The test environment is an ASUS UX303UB with a GeForce 940M. After installing nvidia-352 through Ubuntu's built-in Additional Drivers tool, login fails or the screen goes black. NVIDIA's official site tells you to download the 364 .run driver, but after installing it the screen is also black and the log keeps printing the intel i915 "mismatch in base.adjusted_mode.crtc_clock" error. Checking Ubuntu's site, the last released version compatible with the GT 940M is 352. Following the article referenced below solves it, and it works on Ubuntu GNOME as well.

Bumblebee is an open-source project that brings NVIDIA Optimus technology to Linux: it automatically switches between the NVIDIA GPU and the integrated Intel GPU on laptops to save power, which also resolves the dual-GPU conflict.
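
This post doesn't reproduce the commands, but the referenced askubuntu answer follows the usual Bumblebee setup on Ubuntu 14.04; a minimal sketch, assuming the standard ppa:bumblebee/stable PPA and package names (the exact driver pairing may vary by machine):

$ sudo add-apt-repository ppa:bumblebee/stable
$ sudo apt-get update
$ sudo apt-get install bumblebee bumblebee-nvidia primus

$ optirun glxgears   # runs the program on the NVIDIA GPU through Bumblebee

After a reboot, everything renders on the Intel GPU by default, and the NVIDIA card is powered up only for programs launched with optirun.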
 


I don't own the rights to the source article.
No copyright infringement intended.


Reference: http://askubuntu.com/questions/689724/bumblebee-on-asus-zenbook-ux303lb