Scenario: I want to control / access my comp1 (which runs Linux) from my comp2 (which runs macOS).
Reason: I can't do Data Science work on comp2 (Mac M1), whereas comp1 has everything I need (Linux, GPU, ...).
The two computers must be connected to the same network!
I use NoMachine (I think it's faster and more controllable than TeamViewer).
It requires a username and password. If the machine is signed in with a Microsoft account, don't use the username and password set up in System Preferences; use your Microsoft account credentials instead!
👉 I learned from this answer.
❇️ On the "server computer" (comp1 -- Linux)
```bash
# Knowing its name
hostname
# or `hostnamectl` or `cat /proc/sys/kernel/hostname`
# mine: pop-os

# Knowing the current user
whoami
# mine: thi
# You must know the password!!!

# Install openssh-server
sudo apt update
sudo apt install openssh-server

# Check comp1's IP
ifconfig | grep "192.168"
# mine: 192.168.1.115
```
Test: connect from comp1 to comp1 itself!
```bash
ssh 127.0.0.1
# type the user's password
```
❇️ On the "client computer" (comp2 -- MacOS)
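The client side only needs the built-in `ssh` client. A minimal sketch, assuming the username (`thi`) and the hostname / IP found on comp1 above (`pop-os.local` works via mDNS when both machines are on the same network; otherwise use the IP):

```bash
# On comp2 -- connect to comp1
ssh thi@pop-os.local
# or, using the IP address
ssh thi@192.168.1.115
# then type thi's password
```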
❇️ Copy files
Tip: You can use an SFTP client (e.g. Cyberduck) to do things visually.
```bash
# server
pop-os.local  # or use the IP address
# port
22
# username
thi
# password
```
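If you prefer the terminal over an SFTP client, `scp` and `rsync` do the same job; a sketch using the connection details above (the file and folder paths are made-up examples):

```bash
# Copy a file from comp2 to comp1 (paths are hypothetical)
scp ./notes.txt thi@pop-os.local:/home/thi/
# Copy a folder from comp1 back to comp2
scp -r thi@pop-os.local:/home/thi/project ./project
# rsync re-sends only what changed (-a archive, -v verbose, -z compress)
rsync -avz ./project/ thi@pop-os.local:/home/thi/project/
```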
Suppose that there is a JupyterLab server running on comp1 (in my case, it's running inside a Docker container which is mapped to comp1 via port 8888).
```bash
# On comp2
ssh -N -L localhost:8888:127.0.0.1:8888 thi@192.168.1.115
# Remark: keep this terminal open
```
Then open http://localhost:8888/lab to see the result!
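If you don't want to keep a terminal tab occupied, `ssh -f` sends the tunnel to the background after authentication (a sketch; stop it later with `pkill`):

```bash
# -f: go to background after auth, -N: don't run a remote command
ssh -f -N -L localhost:8888:127.0.0.1:8888 thi@192.168.1.115
# later, to stop the tunnel:
pkill -f "ssh -f -N -L localhost:8888"
```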
I want to SSH from comp2 into the container which is running on comp1.
❇️ Suppose that the running container on comp1 was created from an image which doesn't set up `openssh-server` by default. We will set up an SSH server in the running container.
```bash
# Check the name of the running container
docker ps  # mine: docker_ai

# Go inside the running container
docker exec -it docker_ai bash

# [in the container]

# Install the ssh server
apt update && apt install openssh-server && apt install nano
# Change `root`'s password
passwd  # suppose: qwerty

nano /etc/ssh/sshd_config
# and add
Port 2222
PermitRootLogin yes

# Start the ssh server
/etc/init.d/ssh start
```
```yaml
# expose the ports (e.g. in docker-compose.yml)
ports:
  - "6789:2222"
```
```bash
# Test on comp1
ssh -p 6789 root@localhost
# enter "qwerty" password for "root"

# Connect from comp2
ssh -p 6789 root@192.168.1.115
# enter "qwerty" password for "root"
```
❇️ In case your image has already installed `openssh-server` but doesn't run it by default, we will run the SSH server on port 22 for the running container.
Add the line below to the `Dockerfile` if you want to run `openssh-server` by default:
```dockerfile
# -D: don't detach (keep sshd in the foreground), -d: debug mode, -p: port
CMD $(which sshd) -Ddp 22
```
We shouldn't (cannot??) run two servers in parallel in the Docker image (for example, one for Jupyter Notebook on port 8888 and one for `openssh-server` on port 22).
💡 In this case, you should keep the Jupyter Notebook running. Each time you want to run the `openssh-server`, you can run:
```bash
docker exec docker_ai $(which sshd) -Ddp 22  # and keep this tab open
# or
docker exec -d ....  # detached mode
```
You can also do this completely from comp2:
```bash
ssh thi@192.168.1.115
# Then you are in comp1's terminal
docker exec ....
```
Important remark: If you enter the container's shell and then exit with the `logout` command, it also terminates the server, and you have to run the server again!
Don't forget to forward port 22 (in the container) to port 6789 on comp1, as with the `ports` section above:
```bash
# On comp1
docker exec <container_name> $(which sshd) -Ddp 22
# Keep this tab open and running
```
```bash
# On comp2
ssh -p 6789 root@192.168.1.115
# enter pwd: "qwerty" as in the Dockerfile
```
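To avoid retyping the port and user each time, you can add an entry to comp2's `~/.ssh/config` (a sketch; the alias `comp1-docker` is a made-up name):

```
# ~/.ssh/config on comp2
Host comp1-docker
    HostName 192.168.1.115
    Port 6789
    User root
```

Then `ssh comp1-docker` is enough.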