# Release 422 (13 Jul 2023)
## General

## Security
## BigQuery connector
- Add support for writing to columns with a `timestamp(p) with time zone` type. (#17793)
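
A minimal sketch of the new write support, assuming a catalog named `bigquery` and a table `example.events` with a `created_at` column of type `timestamp(6) with time zone` (all names are hypothetical):

```sql
-- Hypothetical table: bigquery.example.events(id bigint, created_at timestamp(6) with time zone)
INSERT INTO bigquery.example.events (id, created_at)
VALUES (1, TIMESTAMP '2023-07-13 10:15:00 UTC');
```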
## Delta Lake connector
- Add support for renaming columns, as shown in the example after this list. (#15821)
- Improve performance of reading from tables with a large number of checkpoints. (#17405)
- Disallow using the `vacuum` procedure when the max writer version is above 5. (#18095)
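
A minimal sketch of renaming a column, assuming a catalog named `delta` and a table `example.orders` with a `customer_id` column (all names are hypothetical):

```sql
-- Rename an existing column in a Delta Lake table
ALTER TABLE delta.example.orders RENAME COLUMN customer_id TO client_id;
```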
## Hive connector
- Add support for reading the `timestamp with local time zone` Hive type. (#1240)
- Add a native Avro file format writer. This can be disabled with the `avro.native-writer.enabled` configuration property or the `avro_native_writer_enabled` session property, as shown in the example after this list. (#18064)
- Fix query failure when the `hive.recursive-directories` configuration property is set to `true` and partition names contain non-alphanumeric characters. (#18167)
- Fix incorrect results when reading text and `RCTEXT` files with a value that contains the character that separates fields. (#18215)
- Fix incorrect results when reading concatenated `GZIP` compressed text files. (#18223)
- Fix incorrect results when reading large text and sequence files with a single header row. (#18255)
- Fix incorrect reporting of bytes read for compressed text files. (#1828)
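
A minimal sketch of disabling the native Avro writer for a single session, assuming the Hive catalog is named `hive`; setting `avro.native-writer.enabled=false` in the catalog properties file disables it for all sessions:

```sql
-- Connector session properties are qualified with the catalog name
SET SESSION hive.avro_native_writer_enabled = false;
```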
## Iceberg connector
- Add support for adding nested fields with an `ADD COLUMN` statement, as shown in the examples after this list. (#16248)
- Add support for the `register_table` procedure to register Hadoop tables, as shown in the examples after this list. (#16363)
- Change the default file format to Parquet. The `iceberg.file-format` catalog configuration property can be used to specify a different default file format. (#18170)
- Improve performance of reading `row` types from Parquet files. (#17387)
- Fix failure when writing to tables sorted on `UUID` or `TIME` types. (#18136)
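
Minimal sketches of the nested-field and table-registration support, assuming a catalog named `iceberg`, a table `example.shipments` with a `row`-typed `address` column, and a Hadoop table at the given location (all names and paths are hypothetical):

```sql
-- Add a new field "zip" inside the existing row-typed column "address"
ALTER TABLE iceberg.example.shipments ADD COLUMN address.zip varchar;

-- Register an existing Hadoop table with the catalog
CALL iceberg.system.register_table(
    schema_name => 'example',
    table_name => 'legacy_events',
    table_location => 'hdfs://namenode:8020/warehouse/legacy_events');
```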
## Kudu connector
- Add support for table comments when creating tables. (#17945) 
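
A minimal sketch of creating a commented Kudu table, assuming a catalog named `kudu` and the connector's hash-partitioning table properties (table and column names are hypothetical):

```sql
CREATE TABLE kudu.default.users (
    user_id int WITH (primary_key = true),
    name varchar
)
COMMENT 'Registered application users'
WITH (
    partition_by_hash_columns = ARRAY['user_id'],
    partition_by_hash_buckets = 2
);
```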
## Redshift connector
- Prevent returning incorrect results by throwing an error when encountering unsupported types. Previously, the query would fall back to the legacy type mapping. (#18209)