# Release 348 (14 Dec 2020)

## General
- Add support for the `DISTINCT` clause in aggregations within correlated subqueries. (#5904)
- Support `SHOW STATS` for arbitrary queries, as sketched in the example after this list. (#3109)
- Improve query performance by reducing worker-to-worker communication overhead. (#6126)
- Improve performance of `ORDER BY ... LIMIT` queries. (#6072)
- Reduce memory pressure and improve performance of queries involving joins. (#6176)
- Fix `EXPLAIN ANALYZE` for certain queries that contain a broadcast join. (#6115)
- Fix planning failures for queries that contain outer joins and aggregations using the `FILTER (WHERE <condition>)` syntax. (#6141)
- Fix incorrect results when a correlated subquery in a join contains aggregation functions such as `array_agg` or `checksum`. (#6145)
- Fix incorrect query results when using `timestamp with time zone` constants with precision higher than 3 that describe the same point in time but in different zones. (#6318)
- Fix duplicate query completion events if the query fails early. (#6103)
- Fix query failure when views are accessed and the current session does not specify a default schema and catalog. (#6294)
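To illustrate the first two items, here is a minimal JDBC sketch; the coordinator URL and the TPC-H `tpch.tiny` schema are assumptions made for the example, not part of this release:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class General348Examples
{
    public static void main(String[] args)
            throws SQLException
    {
        // Hypothetical coordinator; any cluster with the TPC-H connector mounted as "tpch" works.
        String url = "jdbc:presto://localhost:8080/tpch/tiny";
        try (Connection connection = DriverManager.getConnection(url, "admin", null);
                Statement statement = connection.createStatement()) {
            // DISTINCT aggregation inside a correlated subquery (#5904)
            try (ResultSet rs = statement.executeQuery(
                    "SELECT o.orderkey, " +
                    "  (SELECT count(DISTINCT l.suppkey) FROM lineitem l WHERE l.orderkey = o.orderkey) AS suppliers " +
                    "FROM orders o LIMIT 5")) {
                while (rs.next()) {
                    System.out.println(rs.getLong("orderkey") + " -> " + rs.getLong("suppliers"));
                }
            }
            // SHOW STATS now accepts an arbitrary query, not just a table (#3109)
            try (ResultSet rs = statement.executeQuery(
                    "SHOW STATS FOR (SELECT l.suppkey, o.totalprice FROM lineitem l JOIN orders o ON l.orderkey = o.orderkey)")) {
                while (rs.next()) {
                    System.out.println(rs.getString("column_name") + ": rows=" + rs.getDouble("row_count"));
                }
            }
        }
    }
}
```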
## Web UI

## JDBC driver
- Allow reading a `timestamp with time zone` value as a `ZonedDateTime` using the `ResultSet.getObject(int column, Class<?> type)` method, as sketched in the example after this list. (#307)
- Accept `java.time.LocalDate` in `PreparedStatement.setObject(int, Object)`. (#6301)
- Extend `PreparedStatement.setObject(int, Object, int)` to allow setting `time` and `timestamp` values with precision higher than nanoseconds. This can be done by providing a `String` value representing a valid SQL literal. (#6300)
- Change the representation of a `row` value. `ResultSet.getObject` now returns an instance of the `io.prestosql.jdbc.Row` class, which better represents the returned value. Previously, a `row` value was represented as a `Map` instance, with unnamed fields being named like `field0`, `field1`, etc. You can restore the previous behavior by invoking `getObject(column, Map.class)` on the `ResultSet` object. (#4588)
- Represent a `varbinary` value using a hex string representation in `ResultSet.getString`. Previously the return value was useless, similar to `[B@2de82bf8`. (#6247)
- Report the precision of `time(p)`, `time(p) with time zone`, `timestamp(p)` and `timestamp(p) with time zone` in the `DECIMAL_DIGITS` column of the result set returned from `DatabaseMetaData#getColumns`. (#6307)
- Fix the value of the `DATA_TYPE` column for `time(p)` and `time(p) with time zone` in the result set returned from `DatabaseMetaData#getColumns`. (#6307)
- Fix a failure when reading a `timestamp` or `timestamp with time zone` value with a seconds fraction greater than or equal to 999999999500 picoseconds. (#6147)
- Fix a failure when reading a `time` value with a seconds fraction greater than or equal to 999999999500 picoseconds. (#6204)
- Fix element representation in arrays returned from `ResultSet.getArray`, making it consistent with `ResultSet.getObject`. Previously the elements were represented using the internal client representation (e.g. `String`). (#6048)
- Fix `ResultSetMetaData.getColumnType` for `timestamp with time zone`. Previously the type was miscategorized as `java.sql.Types.TIMESTAMP`. (#6251)
- Fix `ResultSetMetaData.getColumnType` for `time with time zone`. Previously the type was miscategorized as `java.sql.Types.TIME`. (#6251)
- Fix a failure when an instance of the `SphericalGeography` geospatial type is returned in the `ResultSet`. (#6240)
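A minimal sketch of the changed client behaviors above, assuming a reachable coordinator at `localhost:8080`; all queries use constant expressions, so no catalog is needed:

```java
import io.prestosql.jdbc.Row;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.time.LocalDate;
import java.time.ZonedDateTime;
import java.util.Map;

public class Jdbc348Examples
{
    public static void main(String[] args)
            throws SQLException
    {
        try (Connection connection = DriverManager.getConnection("jdbc:presto://localhost:8080", "admin", null);
                Statement statement = connection.createStatement()) {
            // Read a timestamp with time zone value as java.time.ZonedDateTime (#307)
            try (ResultSet rs = statement.executeQuery(
                    "SELECT TIMESTAMP '2020-12-14 10:15:30.123 Europe/Warsaw'")) {
                rs.next();
                ZonedDateTime zdt = rs.getObject(1, ZonedDateTime.class);
                System.out.println(zdt);
            }

            // Bind java.time.LocalDate directly (#6301)
            try (PreparedStatement ps = connection.prepareStatement("SELECT ?")) {
                ps.setObject(1, LocalDate.of(2020, 12, 14));
                try (ResultSet rs = ps.executeQuery()) {
                    rs.next();
                    System.out.println(rs.getObject(1));
                }
            }

            // row values are now io.prestosql.jdbc.Row; the old Map view remains reachable (#4588)
            try (ResultSet rs = statement.executeQuery(
                    "SELECT CAST(ROW(1, 'a') AS ROW(x integer, y varchar))")) {
                rs.next();
                Row row = (Row) rs.getObject(1);
                Map<?, ?> legacy = rs.getObject(1, Map.class);
                System.out.println(row + " / " + legacy);
            }

            // varbinary values now render as a hex string in getString (#6247)
            try (ResultSet rs = statement.executeQuery("SELECT X'65683F'")) {
                rs.next();
                System.out.println(rs.getString(1));
            }
        }
    }
}
```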
## CLI

## Hive connector
- Allow configuring the S3 endpoint in security mapping. (#3869)
- Add support for S3 streaming uploads. Data is uploaded to S3 as it is written, rather than staged to a local temporary file. This feature is disabled by default, and can be enabled using the `hive.s3.streaming.enabled` configuration property. (#3712, #6201)
- Reduce load on the metastore when background cache refresh is enabled. (#6101, #6156)
- Verify that data is in the correct bucket file when reading bucketed tables. This is enabled by default, as incorrect bucketing can cause incorrect query results, but can be disabled using the `hive.validate-bucketing` configuration property or the `validate_bucketing` session property (see the session sketch after this list). (#6012)
- Allow fallback to the legacy Hive view translation logic via the `hive.legacy-hive-view-translation` configuration property or the `legacy_hive_view_translation` session property. (#6195)
- Add the deserializer class name to the split information exposed to the event listener. (#6006)
- Improve performance when querying tables that contain symlinks. (#6158, #6213)
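As one possible usage, a hedged sketch of flipping the new session toggles from a JDBC client; the catalog name `hive` and the coordinator URL are assumptions made for the example (the equivalent catalog-file entries are `hive.validate-bucketing` and `hive.legacy-hive-view-translation`):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class Hive348SessionToggles
{
    public static void main(String[] args)
            throws SQLException
    {
        // Hypothetical coordinator; the Hive catalog is assumed to be registered as "hive".
        try (Connection connection = DriverManager.getConnection(
                "jdbc:presto://localhost:8080/hive/default", "admin", null);
                Statement statement = connection.createStatement()) {
            // Disable bucket-file validation for this session only (#6012)
            statement.execute("SET SESSION hive.validate_bucketing = false");
            // Fall back to the legacy Hive view translation logic for this session (#6195)
            statement.execute("SET SESSION hive.legacy_hive_view_translation = true");
            // Subsequent queries on this connection run with the adjusted session properties.
        }
    }
}
```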
## Iceberg connector

## Kafka connector
- Allow writing `timestamp with time zone` values into columns using the `milliseconds-since-epoch` or `seconds-since-epoch` JSON encoders, as sketched below. (#6074)
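A hedged sketch of what this enables, assuming a Kafka-backed table `kafka.default.events` whose `created_at` column is declared with one of these JSON encoders; the catalog, table, and column names are illustrative only:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class Kafka348InsertExample
{
    public static void main(String[] args)
            throws SQLException
    {
        try (Connection connection = DriverManager.getConnection(
                "jdbc:presto://localhost:8080/kafka/default", "admin", null);
                Statement statement = connection.createStatement()) {
            // The timestamp with time zone value is serialized by the configured JSON encoder,
            // e.g. as milliseconds since the epoch, when the message is written to Kafka.
            statement.executeUpdate(
                    "INSERT INTO events (id, created_at) "
                            + "VALUES (42, TIMESTAMP '2020-12-14 10:15:30.000 UTC')");
        }
    }
}
```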