When I use Order by or Aggregations operations, I see that spill is activated, as shown in the following image.
When I use the join operator, however, it does not spill as before; instead it returns the following error:
io.trino.ExceededMemoryLimitException: Query exceeded per-node memory limit of 100MB [Allocated: 99.37MB, Delta: 956.92kB, Top Consumers: {HashBuilderOperator=98.16MB, LazyOutputBuffer=1.20MB, PagePartitioner=3.05kB}]
    at io.trino.ExceededMemoryLimitException.exceededLocalUserMemoryLimit(ExceededMemoryLimitException.java:40)
    at io.trino.memory.QueryContext.enforceUserMemoryLimit(QueryContext.java:330)
    at io.trino.memory.QueryContext.updateUserMemory(QueryContext.java:165)
    at io.trino.memory.QueryContext.lambda$addTaskContext$0(QueryContext.java:250)
    at io.trino.memory.QueryContext$QueryMemoryReservationHandler.reserveMemory(QueryContext.java:311)
    at io.trino.memory.context.RootAggregatedMemoryContext.updateBytes(RootAggregatedMemoryContext.java:37)
    at io.trino.memory.context.ChildAggregatedMemoryContext.updateBytes(ChildAggregatedMemoryContext.java:38)
My SQL looks like this:
select * from <table1> t1 inner join <table2> t2 on t1.column1 = t2.column1 and t1.column2 = t2.column2;
Stage Performance:
I would like to know what makes the join operation unable to spill. Is there a parameter or method to resolve this?
In Trino, you can enable join spilling by setting the following session properties:
SET SESSION join_distribution_type = 'PARTITIONED';
The corresponding configuration property is join-distribution-type, which accepts BROADCAST, PARTITIONED, or AUTOMATIC.
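For reference, a minimal sketch of the spill-related settings (property names as documented for Trino; the path and values are illustrative assumptions you should tune for your cluster):

```sql
-- Per-session (names as in the Trino docs; values are examples):
SET SESSION join_distribution_type = 'PARTITIONED'; -- spilling applies to partitioned joins
SET SESSION spill_enabled = true;                   -- allow operators to spill in this session

-- Cluster-wide, in config.properties on each node (example path):
-- spill-enabled=true
-- spiller-spill-path=/tmp/trino-spill
```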
In a broadcast join, one of the tables (usually the smaller one) is broadcast to all the nodes that hold parts of the larger table. Each node then performs a local join of the broadcast table with the part of the larger table it has. This operation is performed entirely in memory.
Spilling is a mechanism that comes into play when there is not enough memory to hold the intermediate results of a computation. In the case of a broadcast join, the entire smaller table must fit into memory. If it doesn't, Trino does not perform the join and instead throws an out-of-memory exception.
The reason for this is efficiency. The idea behind a broadcast join is to save on the data shuffling across the network. If the smaller table has to be spilled to disk, it defeats the purpose of the broadcast join as disk I/O operations are generally slower than in-memory operations.
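To see which strategy the planner actually picked for your join (and hence whether spill can apply), you can inspect the query plan; the join node reports its distribution. Table and column names below are placeholders standing in for the query above:

```sql
EXPLAIN
SELECT *
FROM table1 t1
INNER JOIN table2 t2
  ON t1.column1 = t2.column1
 AND t1.column2 = t2.column2;
-- In the plan output, the InnerJoin node shows its distribution,
-- e.g. PARTITIONED vs. REPLICATED (i.e. broadcast).
```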
I referred to the official Trino documentation; my configuration is as follows: