When saving datasets, also use structural type information to set column types

Currently, when saving a dataset we only check whether there is a semantic type describing the column type and map it to the D3M one; if there is none, we set UnknownType. This loses information: if a column has already been parsed and has structural type int, we know it is an integer, we just have not recorded that as a semantic type. The reason we have both semantic types and structural types is so that we can say something like "this is an integer represented as a string", which can then be parsed. But if something is an integer represented as an integer, we can use that information when saving (and converting values to strings) to record that it was an integer, even though it is now represented as a string.
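To illustrate the distinction in plain Python (this is not D3M code, just a sketch of the two representations of the same logical column):

```python
# The same logical integer column in two representations.
parsed = [1, 2, 3]          # structural type int; the value itself tells us
                            # the column is integer, even without a semantic type
unparsed = ['1', '2', '3']  # structural type str; here only a semantic type
                            # like Integer tells us it is parseable back to int

# With the semantic type recorded, the string form can be parsed back.
assert [int(s) for s in unparsed] == parsed
```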

There is a limited set of structural types we allow (see types.py), so we can build a map from those structural types to column type semantic types.
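Such a map might look like the following sketch. The URIs are the standard D3M semantic type URIs; the exact set of allowed structural types (and the helper name `semantic_type_for`) is an assumption here, not the actual contents of types.py:

```python
UNKNOWN_TYPE = 'https://metadata.datadrivendiscovery.org/types/UnknownType'

# Hypothetical map from allowed structural types to column type semantic types.
STRUCTURAL_TO_SEMANTIC = {
    int: 'http://schema.org/Integer',
    float: 'http://schema.org/Float',
    bool: 'http://schema.org/Boolean',
    str: 'http://schema.org/Text',
}

def semantic_type_for(structural_type):
    # Fall back to UnknownType only when the structural type tells us nothing.
    return STRUCTURAL_TO_SEMANTIC.get(structural_type, UNKNOWN_TYPE)
```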

See the test_d3m_saver_synthetic_dataset test. In that test the values of a column are integers; generated metadata picks that up and sets the structural type, but when saved the column is recorded as UnknownType. It is fine that after loading the structural type is str, because we did convert the values to strings when saving, but the semantic type should be set to Integer so that we know how to parse it back.

So, in effect, if we have a synthetic dataset with regular Python values (not just string structural types, but others too), saving it should record enough information that loading it and passing it through the ColumnParser common primitive gets us back to the original dataset. Currently we lose information.
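The desired round trip can be sketched like this. The `save_column` and `parse_column` functions are stand-ins for the real D3M saver and the ColumnParser primitive, shown only to make the invariant concrete:

```python
UNKNOWN_TYPE = 'https://metadata.datadrivendiscovery.org/types/UnknownType'

# Stand-in for the saver: serialize values to strings, but record the
# semantic type derived from the structural type (the proposed fix).
def save_column(values):
    semantic_type = {
        int: 'http://schema.org/Integer',
        float: 'http://schema.org/Float',
    }.get(type(values[0]), UNKNOWN_TYPE)
    return [str(v) for v in values], semantic_type

# Stand-in for ColumnParser: use the recorded semantic type to parse
# the string representation back to the original structural type.
def parse_column(strings, semantic_type):
    parser = {
        'http://schema.org/Integer': int,
        'http://schema.org/Float': float,
    }.get(semantic_type, str)
    return [parser(s) for s in strings]

# The round-trip invariant: save, then parse, recovers the original column.
original = [1, 2, 3]
saved, semantic_type = save_column(original)
assert parse_column(saved, semantic_type) == original
```

With the current behavior, the semantic type would come back as UnknownType and the parse step would leave the column as strings, which is exactly the information loss described above.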