Wednesday, April 20, 2016

Spark 1.6.1 Standalone Mode install: the simplest deployment



We set up two Ubuntu 14.04 machines and build Spark 1.6.1 in Standalone Mode on them, the simplest possible deployment. Assume the first machine's IP is 192.168.1.3 and the second machine's IP is 192.168.1.4.

1. Download Spark 1.6.1 from the official site. For package type choose "Pre-built for Hadoop 2.6 and later", click spark-1.6.1-bin-hadoop2.6.tgz to download, then extract it.

$ wget http://apache.stu.edu.tw/spark/spark-1.6.1/spark-1.6.1-bin-hadoop2.6.tgz 
$ tar zxvf spark-1.6.1-bin-hadoop2.6.tgz

2. Install Java on every machine.


$ sudo apt-get install openjdk-7-jdk



3. On every machine, edit .bashrc in the home directory and append the Java environment variables at the end of the file.
$ vi ~/.bashrc


....
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
export PATH=$JAVA_HOME/bin:$PATH
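
After saving, reload the file so the new variables take effect in the current shell:

$ source ~/.bashrc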

Verify that Java is installed correctly:

$ java -version

4. On every machine, edit /etc/hosts and add both hosts:

$ sudo vi /etc/hosts


192.168.1.3     ubuntu1
192.168.1.4     ubuntu2
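
A quick way to confirm the names resolve (assuming both machines are reachable):

$ ping -c 1 ubuntu1
$ ping -c 1 ubuntu2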

5. Set up passwordless SSH. On the first machine (192.168.1.3), generate a key and authorize it:
$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

Then, on the second machine, copy the whole .ssh directory over:
$ scp -r ubuntu@192.168.1.3:~/.ssh ~
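
If everything worked, logging in between the two machines should no longer ask for a password (the host names assume the /etc/hosts entries from step 4):

$ ssh ubuntu1 hostname
$ ssh ubuntu2 hostname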


6. Turn off the SSH host-key confirmation prompt.
$ sudo vi /etc/ssh/ssh_config


 StrictHostKeyChecking no

This suppresses the following prompt:
ECDSA key fingerprint is 7e:21:58:85:c1:bb:4b:20:c8:60:7f:89:7f:3e:8f:15.
Are you sure you want to continue connecting (yes/no)?

7. Configure spark-env.sh
$ cd ~/spark-1.6.1-bin-hadoop2.6/conf
$ mv spark-env.sh.template spark-env.sh
$ vi spark-env.sh


.....
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
.....
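
Optionally, the same file can also pin the master host and the per-worker resources. These variables are listed in spark-env.sh.template; the values below are only examples, adjust them to your machines:

export SPARK_MASTER_IP=ubuntu1
export SPARK_WORKER_CORES=2
export SPARK_WORKER_MEMORY=2g
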
8. Configure the slaves file
$ mv slaves.template slaves
$ vi slaves



#localhost
 ubuntu1
 ubuntu2


9. Make sure the configured Spark directory exists at the same path on both machines (scp it over if needed), then start the master and the slaves from the Spark directory:
$ sbin/start-all.sh


10. Check that the cluster is up by opening the master web UI:


http://192.168.1.3:8080/
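
You can also check from the shell with jps (shipped with the JDK). On the master machine it should list a Master and a Worker process, and on the other machine a Worker:

$ jps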





11. Run a test program:



$./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master spark://ubuntu1:7077 \
  ./lib/spark-examples-1.6.1-hadoop2.6.0.jar \
  10


Below is the output from an environment built with the same procedure (that cluster happens to run Spark 1.5.1):


 ✘ paslab@fastnet  ~/zino/spark-1.5.1-bin-hadoop2.6  ./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master spark://fastnet:7077 \
  --executor-memory 20G \
  --total-executor-cores 100 \
  ./lib/spark-examples-1.5.1-hadoop2.6.0.jar \
  10
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
16/04/22 15:57:06 INFO SparkContext: Running Spark version 1.5.1
16/04/22 15:57:06 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/04/22 15:57:06 INFO SecurityManager: Changing view acls to: paslab
16/04/22 15:57:06 INFO SecurityManager: Changing modify acls to: paslab
16/04/22 15:57:06 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(paslab); users with modify permissions: Set(paslab)
16/04/22 15:57:06 INFO Slf4jLogger: Slf4jLogger started
16/04/22 15:57:06 INFO Remoting: Starting remoting
16/04/22 15:57:06 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@192.168.1.2:45824]
16/04/22 15:57:06 INFO Utils: Successfully started service 'sparkDriver' on port 45824.
16/04/22 15:57:06 INFO SparkEnv: Registering MapOutputTracker
16/04/22 15:57:06 INFO SparkEnv: Registering BlockManagerMaster
16/04/22 15:57:06 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-654fc5a5-2c10-4cf9-90d0-aaa46dd6259d
16/04/22 15:57:07 INFO MemoryStore: MemoryStore started with capacity 530.3 MB
16/04/22 15:57:07 INFO HttpFileServer: HTTP File server directory is /tmp/spark-aea15ac1-2dfa-445a-aef4-1859becb1ee6/httpd-73c0986a-57f6-4b0b-9fc4-9f84366393c9
16/04/22 15:57:07 INFO HttpServer: Starting HTTP Server
16/04/22 15:57:07 INFO Utils: Successfully started service 'HTTP file server' on port 35012.
16/04/22 15:57:07 INFO SparkEnv: Registering OutputCommitCoordinator
16/04/22 15:57:07 INFO Utils: Successfully started service 'SparkUI' on port 4040.
16/04/22 15:57:07 INFO SparkUI: Started SparkUI at http://192.168.1.2:4040
16/04/22 15:57:07 INFO SparkContext: Added JAR file:/home/paslab/zino/spark-1.5.1-bin-hadoop2.6/./lib/spark-examples-1.5.1-hadoop2.6.0.jar at http://192.168.1.2:35012/jars/spark-examples-1.5.1-hadoop2.6.0.jar with timestamp 1461311827975
16/04/22 15:57:08 WARN MetricsSystem: Using default name DAGScheduler for source because spark.app.id is not set.
16/04/22 15:57:08 INFO AppClient$ClientEndpoint: Connecting to master spark://fastnet:7077...
16/04/22 15:57:08 INFO SparkDeploySchedulerBackend: Connected to Spark cluster with app ID app-20160422155708-0000
16/04/22 15:57:08 INFO AppClient$ClientEndpoint: Executor added: app-20160422155708-0000/0 on worker-20160422155620-192.168.1.2-43391 (192.168.1.2:43391) with 12 cores
16/04/22 15:57:08 INFO SparkDeploySchedulerBackend: Granted executor ID app-20160422155708-0000/0 on hostPort 192.168.1.2:43391 with 12 cores, 20.0 GB RAM
16/04/22 15:57:08 INFO AppClient$ClientEndpoint: Executor added: app-20160422155708-0000/1 on worker-20160422155623-192.168.1.3-47961 (192.168.1.3:47961) with 12 cores
16/04/22 15:57:08 INFO SparkDeploySchedulerBackend: Granted executor ID app-20160422155708-0000/1 on hostPort 192.168.1.3:47961 with 12 cores, 20.0 GB RAM
16/04/22 15:57:08 INFO AppClient$ClientEndpoint: Executor updated: app-20160422155708-0000/0 is now RUNNING
16/04/22 15:57:08 INFO AppClient$ClientEndpoint: Executor updated: app-20160422155708-0000/1 is now RUNNING
16/04/22 15:57:08 INFO AppClient$ClientEndpoint: Executor updated: app-20160422155708-0000/0 is now LOADING
16/04/22 15:57:08 INFO AppClient$ClientEndpoint: Executor updated: app-20160422155708-0000/1 is now LOADING
16/04/22 15:57:08 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 46316.
16/04/22 15:57:08 INFO NettyBlockTransferService: Server created on 46316
16/04/22 15:57:08 INFO BlockManagerMaster: Trying to register BlockManager
16/04/22 15:57:08 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.1.2:46316 with 530.3 MB RAM, BlockManagerId(driver, 192.168.1.2, 46316)
16/04/22 15:57:08 INFO BlockManagerMaster: Registered BlockManager
16/04/22 15:57:08 INFO SparkDeploySchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
16/04/22 15:57:08 INFO SparkContext: Starting job: reduce at SparkPi.scala:36
16/04/22 15:57:08 INFO DAGScheduler: Got job 0 (reduce at SparkPi.scala:36) with 10 output partitions
16/04/22 15:57:08 INFO DAGScheduler: Final stage: ResultStage 0(reduce at SparkPi.scala:36)
16/04/22 15:57:08 INFO DAGScheduler: Parents of final stage: List()
16/04/22 15:57:08 INFO DAGScheduler: Missing parents: List()
16/04/22 15:57:08 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[1] at map at SparkPi.scala:32), which has no missing parents
16/04/22 15:57:08 INFO MemoryStore: ensureFreeSpace(1888) called with curMem=0, maxMem=556038881
16/04/22 15:57:08 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 1888.0 B, free 530.3 MB)
16/04/22 15:57:08 INFO MemoryStore: ensureFreeSpace(1202) called with curMem=1888, maxMem=556038881
16/04/22 15:57:08 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 1202.0 B, free 530.3 MB)
16/04/22 15:57:08 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.1.2:46316 (size: 1202.0 B, free: 530.3 MB)
16/04/22 15:57:08 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:861
16/04/22 15:57:08 INFO DAGScheduler: Submitting 10 missing tasks from ResultStage 0 (MapPartitionsRDD[1] at map at SparkPi.scala:32)
16/04/22 15:57:08 INFO TaskSchedulerImpl: Adding task set 0.0 with 10 tasks
16/04/22 15:57:10 INFO SparkDeploySchedulerBackend: Registered executor: AkkaRpcEndpointRef(Actor[akka.tcp://sparkExecutor@192.168.1.2:50829/user/Executor#1430081976]) with ID 0
16/04/22 15:57:10 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, 192.168.1.2, PROCESS_LOCAL, 2161 bytes)
16/04/22 15:57:10 INFO TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, 192.168.1.2, PROCESS_LOCAL, 2161 bytes)
16/04/22 15:57:10 INFO TaskSetManager: Starting task 2.0 in stage 0.0 (TID 2, 192.168.1.2, PROCESS_LOCAL, 2161 bytes)
16/04/22 15:57:10 INFO TaskSetManager: Starting task 3.0 in stage 0.0 (TID 3, 192.168.1.2, PROCESS_LOCAL, 2161 bytes)
16/04/22 15:57:10 INFO TaskSetManager: Starting task 4.0 in stage 0.0 (TID 4, 192.168.1.2, PROCESS_LOCAL, 2161 bytes)
16/04/22 15:57:10 INFO TaskSetManager: Starting task 5.0 in stage 0.0 (TID 5, 192.168.1.2, PROCESS_LOCAL, 2161 bytes)
16/04/22 15:57:10 INFO TaskSetManager: Starting task 6.0 in stage 0.0 (TID 6, 192.168.1.2, PROCESS_LOCAL, 2161 bytes)
16/04/22 15:57:10 INFO TaskSetManager: Starting task 7.0 in stage 0.0 (TID 7, 192.168.1.2, PROCESS_LOCAL, 2161 bytes)
16/04/22 15:57:10 INFO TaskSetManager: Starting task 8.0 in stage 0.0 (TID 8, 192.168.1.2, PROCESS_LOCAL, 2161 bytes)
16/04/22 15:57:10 INFO TaskSetManager: Starting task 9.0 in stage 0.0 (TID 9, 192.168.1.2, PROCESS_LOCAL, 2161 bytes)
16/04/22 15:57:10 INFO SparkDeploySchedulerBackend: Registered executor: AkkaRpcEndpointRef(Actor[akka.tcp://sparkExecutor@192.168.1.3:45907/user/Executor#-348604070]) with ID 1
16/04/22 15:57:10 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.1.2:35234 with 10.4 GB RAM, BlockManagerId(0, 192.168.1.2, 35234)
16/04/22 15:57:10 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.1.3:36667 with 10.4 GB RAM, BlockManagerId(1, 192.168.1.3, 36667)
16/04/22 15:57:11 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.1.2:35234 (size: 1202.0 B, free: 10.4 GB)
16/04/22 15:57:11 INFO TaskSetManager: Finished task 8.0 in stage 0.0 (TID 8) in 1057 ms on 192.168.1.2 (1/10)
16/04/22 15:57:11 INFO TaskSetManager: Finished task 5.0 in stage 0.0 (TID 5) in 1060 ms on 192.168.1.2 (2/10)
16/04/22 15:57:11 INFO TaskSetManager: Finished task 3.0 in stage 0.0 (TID 3) in 1120 ms on 192.168.1.2 (3/10)
16/04/22 15:57:11 INFO TaskSetManager: Finished task 2.0 in stage 0.0 (TID 2) in 1124 ms on 192.168.1.2 (4/10)
16/04/22 15:57:11 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 1161 ms on 192.168.1.2 (5/10)
16/04/22 15:57:11 INFO TaskSetManager: Finished task 1.0 in stage 0.0 (TID 1) in 1146 ms on 192.168.1.2 (6/10)
16/04/22 15:57:11 INFO TaskSetManager: Finished task 9.0 in stage 0.0 (TID 9) in 1142 ms on 192.168.1.2 (7/10)
16/04/22 15:57:11 INFO TaskSetManager: Finished task 4.0 in stage 0.0 (TID 4) in 1149 ms on 192.168.1.2 (8/10)
16/04/22 15:57:11 INFO TaskSetManager: Finished task 7.0 in stage 0.0 (TID 7) in 1148 ms on 192.168.1.2 (9/10)
16/04/22 15:57:11 INFO TaskSetManager: Finished task 6.0 in stage 0.0 (TID 6) in 1149 ms on 192.168.1.2 (10/10)
16/04/22 15:57:11 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
16/04/22 15:57:11 INFO DAGScheduler: ResultStage 0 (reduce at SparkPi.scala:36) finished in 2.419 s
16/04/22 15:57:11 INFO DAGScheduler: Job 0 finished: reduce at SparkPi.scala:36, took 2.622159 s
Pi is roughly 3.144648
16/04/22 15:57:11 INFO SparkUI: Stopped Spark web UI at http://192.168.1.2:4040
16/04/22 15:57:11 INFO DAGScheduler: Stopping DAGScheduler
16/04/22 15:57:11 INFO SparkDeploySchedulerBackend: Shutting down all executors
16/04/22 15:57:11 INFO SparkDeploySchedulerBackend: Asking each executor to shut down
16/04/22 15:57:11 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
16/04/22 15:57:11 INFO MemoryStore: MemoryStore cleared
16/04/22 15:57:11 INFO BlockManager: BlockManager stopped
16/04/22 15:57:11 INFO BlockManagerMaster: BlockManagerMaster stopped
16/04/22 15:57:11 INFO SparkContext: Successfully stopped SparkContext
16/04/22 15:57:11 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
16/04/22 15:57:11 INFO ShutdownHookManager: Shutdown hook called
16/04/22 15:57:11 INFO ShutdownHookManager: Deleting directory /tmp/spark-aea15ac1-2dfa-445a-aef4-1859becb1ee6


Sunday, April 17, 2016

ASUS K43SD laptop overheating: solved



This K43SD has been my laptop for three and a half years, and it overheats even just opening Chrome. The fan spins at full speed but the CPU temperature still hits 99°C, and sometimes it gets hot enough to shut itself down. I tried cleaning the exposed part of the fan (I didn't dare clean the heatsink fins because they are covered by a film), and even took the copper heatsink off and reapplied thermal paste. The fresh paste did help, bringing the idle temperature down from about 70°C to 50°C, but as soon as I opened Chrome and scrolled Facebook the temperature shot back up to 100°C, even after buying a NT$1000+ cooling pad. In the end I lost patience, bought a replacement fan plus copper heatsink assembly (remember to apply thermal paste when you swap it), and when I peeled the film off the fan and fins I found the fins were completely clogged with dust!!

^ The heatsink fins (I'm honestly not sure of the proper name) were caked with a thick layer of dust on the back.

^ With no software running, a normal temperature should be between 44 and 60°C; anything higher means it really needs cleaning!


To sum up, what affects laptop temperature the most: keep the fan and heatsink fins free of dust, and reapply the thermal paste when it gets old. That basically solves it!

A final tally: thermal paste about NT$150, fan plus heatsink about NT$350, and at least 3 hours of disassembly (going slowly for fear of breaking something). It's really not a great deal, so I'd suggest not doing it yourself and instead paying around NT$500 to have the manufacturer clean it. Also... a cooling pad really doesn't fix the root cause!! Unless you're running heavy games, in which case it can at least keep you at 97°C instead of 100°C.


One more reminder: when separating the keyboard side of the K43SD's case, there are two tiny screws under the optical drive!! They must be removed first, or forcing the case apart will definitely break it.

Tuesday, April 12, 2016

ASUS Zenbook, Arch, GNOME: multitouch gestures with touchegg



Because multi-touch gestures are intercepted by GNOME [Note 1], the settings in /etc/X11/xorg.conf.d/50-synaptics.conf are ignored as well. From reading the touchegg source code, it accesses the underlying driver directly, so this solution is much cleaner than stripping out GNOME's touchpad handling and recompiling, as in the method described here.

1. Install  elantech-asustouchpad-dkms 4.0.2-1


2. Install xf86-input-synaptics and, from the AUR, touchegg and touchegg-gce-git.


    touchegg-gce-git is a GUI configuration tool for touchegg.

3. Turn off the touchpad in GNOME: Settings -> Mouse & Touchpad -> Touchpad Off





4. Create /etc/X11/xorg.conf.d/50-synaptics.conf:


Section "InputClass"
        Identifier "touchpad catchall"
        Driver "synaptics"
        MatchIsTouchpad "on"
        Option "TapButton1" "1"
        Option "TapButton2" "0"
        Option "TapButton3" "0"
        Option "ClickFinger2" "0"
        Option "ClickFinger3" "0"

# This option is recommend on all Linux systems using evdev, but cannot be
# enabled by default. See the following link for details:
# http://who-t.blogspot.com/2010/11/how-to-ignore-configuration-errors.html
        MatchDevicePath "/dev/input/event*"


EndSection
The per-model driver settings can be used as a reference, although this page is somewhat dated:
https://code.google.com/archive/p/touchegg/wikis/ConfigureDevices.wiki

Option reference: https://wiki.archlinux.org/index.php/Touchpad_Synaptics
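
To check that the options above actually took effect, synclient (shipped with xf86-input-synaptics, see the console tools listed further down) can list and change them on a live system; the option names are the same ones used in the config file:

$ synclient -l | grep -i tapbutton
$ synclient TapButton1=1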

5. Start touchegg before the desktop session comes up:

$ sudo vi /etc/profile.d/touchegg.sh


#!/bin/bash
touchegg > /dev/null 2>&1 &

6. Reboot.




Some of this content comes from "Multitouch gestures with the Dell XPS 13 on Arch Linux" (see the references below).
[Note 1]

xorg.conf.d/50-synaptics.conf does not seem to apply under GNOME and MATE

GNOME and MATE, by default, will overwrite various options for your touch-pad. This includes configurable features for which there is no graphical configuration within GNOME's system control panel. This may cause it to appear that /etc/X11/xorg.conf.d/50-synaptics.conf is not applied. Please refer to the GNOME section in this article to prevent this behavior.


Possibly related:


Console tools

  • Synclient (Recommended) — command line utility to configure and query Synaptics driver settings on a live system, the tool is developed by the synaptics driver maintainers and is provided with the synaptics driver
http://xorg.freedesktop.org/ || xf86-input-synaptics
  • xinput — small general-purpose CLI tool to configure devices
http://xorg.freedesktop.org/ || xorg-xinput

Graphical tools

  • GPointing Device Settings — provides graphical on the fly configuration for several pointing devices connected to the system, including your synaptics touch pad. This application replaces GSynaptics as the preferred tool for graphical touchpad configuration through the synaptics driver
https://wiki.gnome.org/Attic/GPointingDeviceSettings || gpointing-device-settings



References:
Package Details: elantech-asustouchpad-dkms: https://aur.archlinux.org/packages/elantech-asustouchpad-dkms/
Arch wiki, Touchpad Synaptics: https://wiki.archlinux.org/index.php/Touchpad_Synaptics
Arch wiki, Touchegg: https://wiki.archlinux.org/index.php/Touchegg
Multitouch gestures with the Dell XPS 13 on Arch Linux: https://hroy.eu/tips/dell-xps-13-touchpad/

Monday, April 11, 2016

Ubuntu GNOME 14.04 touchpad settings missing



After installing Ubuntu GNOME 14.04 I found that no two-or-more-finger touchpad gestures worked.
It appears that ASUS Zenbooks running Ubuntu 14.04 with kernels earlier than 3.19 can all hit this problem; the models known so far are the UX303LN and the UX303UB.
A fix for this has already been published by Pilot6:

sudo add-apt-repository ppa:hanipouspilot/focaltech-dkms
sudo apt-get update
sudo apt-get install focaltech-dkms
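
After installing, you can check that DKMS built the module (the exact module name and version shown depend on the package):

$ dkms status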


https://launchpad.net/~hanipouspilot/+archive/ubuntu/focaltech-dkms
http://ubuntuforums.org/showthread.php?t=2253069&page=10
http://askubuntu.com/questions/609228/asus-x750ja-and-ubuntu-gnome-14-04

Sunday, April 10, 2016

Ubuntu 14.04 NVIDIA driver install (ASUS UX303UB): fixing the black screen after reboot



The test environment is an ASUS UX303UB with a GeForce 940M. After installing nvidia-352 through Ubuntu's built-in Additional Drivers tool, I either could not log in or got a black screen. NVIDIA's site tells you to download the 364 .run driver, but after installing it the screen also stayed black and kept printing the error "intel i915 mismatch in base.adjusted_mode.crtc_clock". Checking Ubuntu's packages, the last released driver version compatible with the GeForce 940M is 352. Following the article referenced below solved it; it also works on Ubuntu GNOME.

Bumblebee is an open-source project that brings NVIDIA Optimus support to Linux. On laptops it automatically switches between the NVIDIA GPU and the integrated Intel GPU to save power, and in doing so it also avoids the dual-GPU conflict.
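
For reference, a minimal sketch of the commonly documented Bumblebee setup on Ubuntu 14.04 (follow the referenced askubuntu answer for the exact steps and driver version used here):

$ sudo add-apt-repository ppa:bumblebee/stable
$ sudo apt-get update
$ sudo apt-get install bumblebee bumblebee-nvidia primus

After a reboot, programs run on the Intel GPU by default; prefix a command with optirun to run it on the NVIDIA GPU:

$ optirun glxgears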
 


I don't own the rights to the source article.
No copyright infringement intended.


Reference: http://askubuntu.com/questions/689724/bumblebee-on-asus-zenbook-ux303lb

Wednesday, March 16, 2016

mpstat: showing current CPU usage per core


The CPU usage figure here is computed as: 100% - idle% = current CPU usage.

Both top and mpstat take their numbers from Linux's /proc/stat; if you are interested, google it.
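
As a rough illustration of that idea, here is a minimal sketch that samples the aggregate "cpu" line of /proc/stat twice and reports usage as 100% minus the idle share of the delta (it deliberately ignores the iowait/irq/steal columns that mpstat also tracks):

#!/bin/bash
# First line of /proc/stat: "cpu user nice system idle iowait irq softirq steal ..."
read -r _ user nice system idle _ < /proc/stat
t1_idle=$idle; t1_total=$((user + nice + system + idle))
sleep 1
read -r _ user nice system idle _ < /proc/stat
d_idle=$((idle - t1_idle)); d_total=$((user + nice + system + idle - t1_total))
echo "CPU usage over the last second: $(( 100 * (d_total - d_idle) / d_total ))%"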

mpstat is part of the sysstat package, so install that first:

sudo apt-get install sysstat

Usage:
Usage: mpstat [ options ] [ <interval> [ <count> ] ]
Options are:
[ -A ] [ -u ] [ -V ] [ -I { SUM | CPU | SCPU | ALL } ]
[ -P { <cpu> [,...] | ON | ALL } ]


mpstat -P ALL shows the usage of every CPU:

zino@ubuntu:~$ mpstat -P ALL

Linux 3.19.0-25-generic (ubuntu)        03/16/2016      _x86_64_        (2 CPU)
04:50:51 PM  CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest  %gnice   %idle
04:50:51 PM  all    0.04    0.00    0.28    0.05    0.00    0.00    0.00    0.00    0.00   99.64
04:50:51 PM    0    0.04    0.00    0.28    0.04    0.00    0.00    0.00    0.00    0.00   99.64
04:50:51 PM    1    0.03    0.00    0.27    0.06    0.00    0.00    0.00    0.00    0.00   99.63

You can also sample at a given interval for a given count (here once per second, three times); an average is printed at the end:

zino@ubuntu:~$ mpstat -P ALL 1 3

Linux 3.19.0-25-generic (ubuntu)        03/16/2016      _x86_64_        (2 CPU)

05:26:34 PM  CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest  %gnice   %idle
05:26:35 PM  all    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
05:26:35 PM    0    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
05:26:35 PM    1    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00

05:26:35 PM  CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest  %gnice   %idle
05:26:36 PM  all    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
05:26:36 PM    0    0.00    0.00    1.00    0.00    0.00    0.00    0.00    0.00    0.00   99.00
05:26:36 PM    1    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00

05:26:36 PM  CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest  %gnice   %idle
05:26:37 PM  all    0.00    0.00    0.50    0.00    0.00    0.00    0.00    0.00    0.00   99.50
05:26:37 PM    0    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
05:26:37 PM    1    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00


Average:     CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest  %gnice   %idle
Average:     all    0.00    0.00    0.17    0.00    0.00    0.00    0.00    0.00    0.00   99.83
Average:       0    0.00    0.00    0.33    0.00    0.00    0.00    0.00    0.00    0.00   99.67
Average:       1    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00


To print just a single usage figure per core (100 minus the %idle column), filter the output with grep and awk:

mpstat -P ALL | grep -v 'CPU\|all\|^$' |awk '{print $1" "$2 " CPU "$3" "100-$13"%"}'


zino@ubuntu:~$ mpstat -P ALL | grep -v 'CPU\|all\|^$' |awk '{print $1" "$2 " CPU "$3" "100-$13"%"}'
04:34:56 PM CPU 0 0.39%
04:34:56 PM CPU 1 0.37%


How to grep out blank lines: https://blog.longwin.com.tw/2012/09/grep-filter-null-line-2012/

Friday, March 4, 2016

Sending bash commands over SSH


Sending a command is simple: just append the command, wrapped in single quotes, after the ssh arguments. Note that only commands visible to the remote shell will work; anything whose path is set only in .bashrc cannot be called (see the example at the end of this post).

Format:

ssh user@host   'command'

Example:

ssh  user@192.168.1.1  'ls -al'

Inside the single quotes you can chain several commands with ';', or pipe one into another with '|'.

For example:

ssh  user@192.168.1.1  'hostname; netstat -tln | grep 123'
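
About the .bashrc limitation mentioned above: the shell that runs the remote command is non-interactive, so PATH entries added only in ~/.bashrc are normally not applied. The tool name and install path below are purely hypothetical, just to illustrate the workarounds:

ssh user@192.168.1.1 'mytool --version'                              # fails: command not found
ssh user@192.168.1.1 '/opt/mytool/bin/mytool --version'              # works: absolute path
ssh user@192.168.1.1 'PATH=$PATH:/opt/mytool/bin mytool --version'   # works: extend PATH inline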