I am trying to use AWS Database Migration Service (DMS) to populate a SQL Server 2014 table from S3. The external table definition on my S3 source endpoint is:
{
    "TableCount": "1",
    "Tables": [
        {
            "TableName": "employee",
            "TablePath": "public/employee/",
            "TableOwner": "",
            "TableColumns": [
                {
                    "ColumnName": "Id",
                    "ColumnType": "INT8",
                    "ColumnNullable": "false",
                    "ColumnIsPk": "true"
                },
                {
                    "ColumnName": "HireDate",
                    "ColumnType": "TIMESTAMP"
                },
                {
                    "ColumnName": "Name",
                    "ColumnType": "STRING",
                    "ColumnLength": "20"
                }
            ],
            "TableColumnsTotal": "3"
        }
    ]
}
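The data files under public/employee/ are plain CSV, one row per record, matching this definition. A hypothetical row (the Id and Name values here are made up for illustration; the timestamp is the actual value from the error below):

    101,2018-04-11 08:02:16.788027,Jane Smith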
When I run the migration task, it fails with the Datetime field overflow error shown at the bottom of this post: SQL Server rejects the value 2018-04-11 08:02:16.788027 coming from S3, because the DATETIME column that DMS creates only supports three fractional-second digits, while the S3 data carries six.
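You can see the precision mismatch directly in T-SQL (a quick sketch; the nested CAST just round-trips the value from the error message):

    -- DATETIME2(6) holds the microsecond value exactly:
    SELECT CAST('2018-04-11 08:02:16.788027' AS DATETIME2(6));
    -- Forcing the same value into DATETIME rounds away the microseconds
    -- (DATETIME is only accurate to roughly 3 ms) -> 2018-04-11 08:02:16.787:
    SELECT CAST(CAST('2018-04-11 08:02:16.788027' AS DATETIME2(6)) AS DATETIME);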
My question: is there a way to tell AWS DMS to create the TIMESTAMP columns from S3 as DATETIME2 columns in SQL Server? Note that each time the migration task runs, the table is dropped and recreated. I can work around this by manually creating the table in SQL Server with HireDate as DATETIME2 and then setting the task's target table preparation mode to TRUNCATE rather than drop/create, but that is not ideal for my current solution.

The relevant lines from the task log:
[TARGET_LOAD ]E: Failed to execute statement: 'INSERT INTO [public].[employee]([Id],[HireDate],[Name]) values (?,?,?)' [1022502] (ar_odbc_stmt.c:2456)
[TARGET_LOAD ]E: RetCode: SQL_ERROR SqlState: 22008 NativeError: 0 Message: [Microsoft][ODBC Driver 13 for SQL Server]Datetime field overflow. Fractional second precision exceeds the scale specified in the parameter binding. Line: 1 Column: 4 [1022502] (ar_odbc_stmt.c:2462)
[TARGET_LOAD ]E: Invalid input for column 'HireDate' of table 'public'.'employee' in line number 1.(sqlserver_endpoint_imp.c:2357)
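For reference, the manual workaround above amounts to pre-creating the target table with a higher-precision type before the task runs. A sketch of the DDL (the BIGINT and NVARCHAR(20) mappings are my assumptions for how DMS would translate the INT8 and STRING columns):

    -- Pre-create the table so DMS does not generate HireDate as DATETIME.
    -- DATETIME2(6) preserves the six fractional-second digits in the S3 data.
    CREATE TABLE [public].[employee] (
        [Id]       BIGINT       NOT NULL,  -- assumed mapping for the INT8 source column
        [HireDate] DATETIME2(6) NULL,      -- instead of DATETIME, which overflows
        [Name]     NVARCHAR(20) NULL,      -- assumed mapping for STRING, length 20
        CONSTRAINT [PK_employee] PRIMARY KEY ([Id])
    );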