I am using the following CloudFormation resources to create an EBS volume and attach it to an EC2 instance:
  VOLData1:
    Type: AWS::EC2::Volume
    DeletionPolicy: "Snapshot"
    Properties:
      AvailabilityZone: !GetAtt EC2ESDataNode1.AvailabilityZone
      Iops: 5000
      Size: 100
      VolumeType: "io1"
      Tags:
        - Key: "Name"
          Value: "es-data-1"
  VOLATTCHData1:
    Type: AWS::EC2::VolumeAttachment
    Properties:
      Device: "/dev/sdd"
      InstanceId: !Ref EC2ESDataNode1
      VolumeId: !Ref VOLData1
However, when I SSH into the instance, the volume is attached (visible as nvme1n1) but not mounted:
pkara@ip-10-11-12-99:~$ lsblk
NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
loop0         7:0    0   18M  1 loop /snap/amazon-ssm-agent/930
loop1         7:1    0 88.2M  1 loop /snap/core/5897
nvme0n1     259:0    0    8G  0 disk 
└─nvme0n1p1 259:1    0    8G  0 part /
nvme1n1     259:2    0  100G  0 disk 
pkara@ip-10-11-12-99:~$ df -h
Filesystem      Size  Used Avail Use% Mounted on
udev             31G     0   31G   0% /dev
tmpfs           6.2G  776K  6.2G   1% /run
/dev/nvme0n1p1  7.7G  3.1G  4.7G  40% /
tmpfs            31G     0   31G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs            31G     0   31G   0% /sys/fs/cgroup
/dev/loop0       18M   18M     0 100% /snap/amazon-ssm-agent/930
/dev/loop1       89M   89M     0 100% /snap/core/5897
tmpfs           6.2G     0  6.2G   0% /run/user/1001
Do I need to handle partitioning/formatting and mounting of the new volume myself? If so, what is the recommended way to do it, so that the mount point is not lost on each reboot?
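For context, the approach I was considering is a UserData script on the instance that formats the volume on first boot and records it in /etc/fstab by UUID. This is only an untested sketch; the device name /dev/nvme1n1, the /data mount point, and the ext4 filesystem are assumptions on my side:

  EC2ESDataNode1:
    Type: AWS::EC2::Instance
    Properties:
      # ...existing instance properties...
      UserData:
        Fn::Base64: |
          #!/bin/bash -xe
          # The VolumeAttachment is created after the instance boots, so wait
          # for the device to appear (on Nitro instances /dev/sdd is exposed
          # as an NVMe device, assumed here to be /dev/nvme1n1)
          DEVICE=/dev/nvme1n1
          while [ ! -b "$DEVICE" ]; do sleep 5; done
          # Create a filesystem only if the volume is still blank
          if ! blkid "$DEVICE"; then
            mkfs -t ext4 "$DEVICE"
          fi
          mkdir -p /data
          # Mount by UUID so the entry survives reboots and NVMe renumbering
          UUID=$(blkid -s UUID -o value "$DEVICE")
          grep -q "$UUID" /etc/fstab || \
            echo "UUID=$UUID /data ext4 defaults,nofail 0 2" >> /etc/fstab
          mount -a

The wait loop is there because UserData may run before the attachment completes, and mounting by UUID with the nofail option is meant to keep the fstab entry valid across reboots. Is something like this the recommended approach, or is there a better way to do it from CloudFormation itself?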