Here is an excerpt from the book "Hibernate in Action", p. 205:
“Managing the first-level cache
Consider this frequently asked question: “I get an OutOfMemoryException when I try
to load 100,000 objects and manipulate all of them. How can I do mass updates
with Hibernate?”
It’s our view that ORM isn’t suitable for mass update (or mass delete) operations.
If you have a use case like this, a different strategy is almost always better: call a
stored procedure in the database or use direct SQL UPDATE and DELETE statements.
Don’t transfer all the data to main memory for a simple operation if it can be performed
more efficiently by the database. If your application is mostly mass operation
use cases, ORM isn’t the right tool for the job!
If you insist on using Hibernate even for mass operations, you can immediately
evict() each object after it has been processed (while iterating through a query
result), and thus prevent memory exhaustion.
To completely evict all objects from the session cache, call Session.clear(). We
aren’t trying to convince you that evicting objects from the first-level cache is a bad
thing in general, but that good use cases are rare. Sometimes, using projection and
a report query, as discussed in chapter 7, section 7.4.5, “Improving performance
with report queries,” might be a better solution.
Note that eviction, like save or delete operations, can be automatically applied
to associated objects. Hibernate will evict associated instances from the Session
if the mapping attribute cascade is set to all or all-delete-orphan for a particular
association.
When a first-level cache miss occurs, Hibernate tries again with the second-level
cache if it’s enabled for a particular class or association.”
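The evict-while-iterating pattern the book describes can be sketched roughly like this (assumptions: an open Hibernate Session, a hypothetical `Item` entity, and a hypothetical `process()` method — none of these names come from the book):

```java
// Sketch only, Hibernate 2-era API as used in "Hibernate in Action".
// iterate() fetches results one at a time instead of materializing
// the whole result list in memory at once.
Iterator items = session.iterate("from Item");
while (items.hasNext()) {
    Item item = (Item) items.next();
    process(item);        // hypothetical per-object work
    session.evict(item);  // drop the instance from the first-level cache
}
// Or, to empty the entire first-level cache in one call:
// session.clear();
```

Even so, as the book says, a single SQL `UPDATE`/`DELETE` or a stored procedure is almost always the better tool for a true mass operation.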
On the other hand, you could use the second-level cache, which is more appropriate for data that is stable:
“The Hibernate second-level cache
The Hibernate second-level cache has process or cluster scope; all sessions share
the same second-level cache. The second-level cache actually has the scope of a
SessionFactory.
Persistent instances are stored in the second-level cache in a disassembled form.
Think of disassembly as a process a bit like serialization (the algorithm is much,
much faster than Java serialization, however).
The internal implementation of this process/cluster scope cache isn’t of much
interest; more important is the correct usage of the cache policies—that is, caching
strategies and physical cache providers.
Different kinds of data require different cache policies: the ratio of reads to
writes varies, the size of the database tables varies, and some tables are shared with
other external applications. So the second-level cache is configurable at the
granularity of an individual class or collection role. This lets you, for example,
enable the second-level cache for reference data classes and disable it for classes
that represent financial records. The cache policy involves setting the following:
■ Whether the second-level cache is enabled
■ The Hibernate concurrency strategy
■ The cache expiration policies (such as timeout, LRU, memory-sensitive)
■ The physical format of the cache (memory, indexed files, cluster-replicated)
Not all classes benefit from caching, so it’s extremely important to be able to disable
the second-level cache. To repeat, the cache is usually useful only for read-mostly
classes. If you have data that is updated more often than it’s read, don’t
enable the second-level cache, even if all other conditions for caching are true!
Furthermore, the second-level cache can be dangerous in systems that share the
database with other writing applications. As we explained in earlier sections, you
must exercise careful judgment here.
The Hibernate second-level cache is set up in two steps. First, you have to decide
which concurrency strategy to use. After that, you configure cache expiration and
physical cache attributes using the cache provider.”
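For step one, the concurrency strategy is declared per class (or collection role) in the mapping. A minimal sketch in a Hibernate 2-style mapping file — the class and column names here are assumptions for illustration, not taken from the book:

```xml
<!-- Sketch: enable the second-level cache for a read-mostly class.
     usage="read-only" is the concurrency strategy; expiration policy
     and the physical cache provider are configured separately in the
     provider's own configuration (step two). -->
<class name="Category" table="CATEGORY">
    <cache usage="read-only"/>
    <id name="id" column="CATEGORY_ID">
        <generator class="native"/>
    </id>
    <property name="name" column="NAME"/>
</class>
```

Step two then happens outside the mapping, in the provider's configuration (for example an EHCache configuration file), where you set timeouts, maximum element counts, and so on.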
I hope this helps, and that you wrap up by posting the solution you end up finding.
Cheers.