
Fail when PDF contains duplicate objects

Hi,

I've come across an issue where processing a PDF takes a very long time. It happens because parsing slowly approaches the recursionLimit in Parser.cc. I found that some objects are duplicated in the PDF. Detecting these duplicates and failing on them resolves the issue. I cannot share the PDF, but it is essentially the same issue as the one reported here: https://bugs.freedesktop.org/show_bug.cgi?id=96217 .

To be specific, the commit in this merge request checks whether the current xref entry is already occupied during XRef::constructXRef(). If it is, the commit compares the entry's recorded offset with the current position and fails if they differ.
