
I'm having a problem accessing one of my VMs (called myvm1 here) after restoring a disk from a snapshot. Here is what I did yesterday (which worked just fine):

  • I made a snapshot of disk1.
  • I created a new disk, called disk2, using the snapshot created above.
  • I attached the disk to myvm1 through the Google Cloud Console.
  • I unmounted disk1 and mounted disk2.
  • I deleted disk1.
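For reference, the same sequence can be run from the CLI; this is only a sketch, where the zone and snapshot name are placeholder assumptions to substitute with your own:

```shell
# Snapshot disk1, restore the snapshot as disk2, and attach disk2 to myvm1.
gcloud compute disks snapshot disk1 \
    --snapshot-names=disk1-snap --zone=europe-west1-b --project=myproject
gcloud compute disks create disk2 \
    --source-snapshot=disk1-snap --zone=europe-west1-b --project=myproject
gcloud compute instances attach-disk myvm1 \
    --disk=disk2 --zone=europe-west1-b --project=myproject
```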

Everything worked fine, and database data on disk2 was accessible as desired. There's not much else on that disk.

Today, I wanted to "rename" disk2 to disk1 (to avoid future problems with our Terraform setup). I did this as follows:

  • I made a snapshot of disk2.
  • I created a new disk, calling it disk1, using the snapshot above.
  • I attached the disk to myvm1 using the terminal: gcloud compute --project=myproject instances attach-disk myvm1 --disk disk1

After this, when I attempted to ssh into myvm1 (to unmount and mount), I got:

ssh: connect to host myvm1 port 22: Connection refused

I have attempted the following to solve this/investigate:

  • stopping and starting the VM repeatedly (it takes considerably longer than the other VMs in the same project)
  • detaching disk1 (and re-attaching it)

Other information:

  • other VMs in the same project are still accessible via ssh.
  • I did nothing else to the VM yesterday or today but what I have written above. The system has not been in use between yesterday and now (it was shut down over night to save money).
  • Using SSH from the Google Console does not work, but it does not work for the other VMs either, since we connect using private keys.
  • "The instance is booting up and sshd is not yet running." - It's listed as RUNNING.
  • "The instance is not running sshd." I have not manually disabled sshd.
  • "sshd is listening on a port other than the one you are connecting to." I've made no changes to ports.
  • "There is no firewall rule allowing SSH access on the port." Under "Firewall rules and routes details", port 22 is enabled, and the firewall rules are identical to those of the other VMs in the same project.
  • "The firewall rule allowing SSH access is enabled, but is not configured to allow connections from GCP Console services." We don't want to be able to connect via GCP Console so that doesn't matter.
  • "The instance is shut down." - It's running.
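For completeness, the firewall setup can also be checked from the CLI; the zone below is a placeholder:

```shell
# List the project's firewall rules, then the network tags on the instance;
# an allow rule for tcp:22 must apply to the instance (by tag or to all instances).
gcloud compute firewall-rules list --project=myproject
gcloud compute instances describe myvm1 \
    --zone=europe-west1-b --project=myproject --format="value(tags.items)"
```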

Debug information for the ssh-call:

me@mycomputer:~/project$ ssh myvm1 -vvv
OpenSSH_7.2p2 Ubuntu-4ubuntu2.4, OpenSSL 1.0.2g  1 Mar 2016
debug1: Reading configuration data /home/me/.ssh/config
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug2: resolving "myvm1" port 22
debug2: ssh_connect_direct: needpriv 0
debug1: Connecting to myvm1 [10.23.0.3] port 22.
debug1: connect to address 10.23.0.3 port 22: Connection refused
ssh: connect to host myvm1 port 22: Connection refused

I've looked at the solution mentioned in Why Google Cloud Compute Engine instance gives ssh connection refused after restart?, but since I have not yet mounted/unmounted any of the disks, I don't see how it could be the same problem.
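For reference, the failure mode described in that question is a stale /etc/fstab entry: an entry that points at a disk which no longer exists and lacks the nofail option can hang the boot before sshd starts. A defensive entry (the UUID and mount point here are placeholders) would look like:

```
# /etc/fstab: mount by UUID; nofail lets the boot continue if the disk is absent
UUID=<uuid-of-data-disk>  /mnt/data  ext4  defaults,nofail  0  2
```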

I would very much appreciate any help you can give me. Solutions involving creating a new instance are not relevant, as I want to know what went wrong in the first place, so that this does not happen in a production environment. Thankfully myvm1 is just a sandbox system.


2 Answers

0
votes

A port 22 error can come from two sources: the firewall not being properly set up in GCP, or port 22 not accepting SSH connections from within your instance. Assuming the firewall is properly set up, since it works on your other instances, please try to log in via the serial console and check your iptables rules.

In order to connect to serial console you will have to perform the following:

1). Activate the “Connect to serial console” button.

Go to VM instances, click on your VM, click Edit, activate “Enable connecting to serial ports” in the Remote access area, and click Save.

2). Create a username and password.

Go to VM instances, click on your VM again, click Edit, and fill in the Custom metadata section with:

In key: startup-script

In value:

#!/bin/bash
# Startup scripts run as root, so sudo is not needed; pipe the
# credentials straight into chpasswd.
useradd -G sudo pamela
echo 'pamela:pamela5' | chpasswd

(This script creates the user pamela with the password pamela5, which you will use to log in later. Please use something else for security purposes.)

3). Reboot the instance for the changes to take effect.
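The steps above can also be done with gcloud, and once connected there are a few obvious checks to run. A sketch, where the zone is a placeholder and the in-guest commands assume a systemd-based image:

```shell
# Enable the serial console on the instance and connect to it
gcloud compute instances add-metadata myvm1 \
    --metadata serial-port-enable=TRUE --zone=europe-west1-b --project=myproject
gcloud compute connect-to-serial-port myvm1 \
    --zone=europe-west1-b --project=myproject

# Once logged in on the serial console:
sudo systemctl status ssh      # is sshd running at all?
sudo ss -tlnp | grep ':22'     # is anything listening on port 22?
sudo iptables -L INPUT -n      # any in-guest rule blocking port 22?
```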

0
votes

I had the same problem. I think the snapshot file was corrupted.