PDFtools and PDF forms on a Linux box. It's worth installing a Linux box just to run this stuff; the tools are available in other environments but noticeably crashier anywhere except Linux. Well worth the couple hundred bucks. And I don't blame the software; I blame the fact that there is no real specification, so you are reverse engineering your way through figuring out how to pull these files apart and put them back together.
I used to take the print streams from various applications in PostScript. I would take a 500-page document that was supposed to be split apart, joined back together, printed, and distributed to hundreds of people (only certain people got certain pages, plus the headers) and turn it into roughly 20,000 pages across hundreds of reports.
If you have any ability to create the print stream, embed hidden tags for each page.
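One way to do that, assuming you control print-stream generation and the output follows Adobe's Document Structuring Conventions, is to drop a comment right after each `%%Page:` line. PostScript interpreters ignore comments, but a later splitting script can key on them. The tag name `%%RecipientTag` here is made up for illustration; any unique comment works.

```python
# Sketch: inject a hidden routing comment after each %%Page: line of a
# DSC-conforming PostScript stream. The "%%RecipientTag" comment name is
# an assumption for this example, not a standard DSC comment.

def tag_pages(ps_lines, recipients):
    """Insert a routing comment after each %%Page: line.

    recipients: one recipient id per page, in page order.
    """
    out = []
    page_idx = 0
    for line in ps_lines:
        out.append(line)
        if line.startswith("%%Page:"):
            out.append(f"%%RecipientTag: {recipients[page_idx]}")
            page_idx += 1
    return out

ps = [
    "%!PS-Adobe-3.0",
    "%%Page: 1 1",
    "(hello) show",
    "%%Page: 2 2",
    "(world) show",
    "%%EOF",
]
tagged = tag_pages(ps, ["alice", "bob"])
```

Because the tag is a plain comment, the printed output is unchanged; only your own tooling ever sees it.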
I would also do the reverse: take PDF documents, convert them to PostScript, rip them apart, and put them back together. I never claimed to understand the actual PostScript; a stack-based, reverse-Polish-notation language is not for me. But there were enough identifiable pieces to isolate individual pages, figure out their key tags, and add extra data to each page, such as the page number and who it was going to.
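The "identifiable pieces" are mostly the DSC structure comments, so you can split on `%%Page:` markers and route pages by tag without parsing any real PostScript. A minimal sketch, reusing the hypothetical `%%RecipientTag` comment from above:

```python
# Sketch: split a DSC-conforming PostScript stream into per-page chunks
# on %%Page: markers, then group pages by an embedded routing comment.
# "%%RecipientTag" is a made-up comment name, not part of the DSC spec.

def split_pages(ps_text):
    """Return (prolog lines, list of per-page line lists)."""
    prolog, pages, current = [], [], None
    for line in ps_text.splitlines():
        if line.startswith("%%Page:"):
            if current is not None:
                pages.append(current)
            current = [line]
        elif line.startswith(("%%Trailer", "%%EOF")) and current is not None:
            pages.append(current)       # trailer ends the last page
            current = None
        elif current is None:
            prolog.append(line)         # before the first page: prolog
        else:
            current.append(line)
    if current is not None:
        pages.append(current)
    return prolog, pages

def route_by_tag(pages):
    """Map recipient tag -> list of pages carrying that tag."""
    routed = {}
    for page in pages:
        tag = next((l.split(":", 1)[1].strip() for l in page
                    if l.startswith("%%RecipientTag:")), "untagged")
        routed.setdefault(tag, []).append(page)
    return routed

ps = "\n".join([
    "%!PS-Adobe-3.0",
    "%%Page: 1 1", "%%RecipientTag: alice", "(a) show",
    "%%Page: 2 2", "%%RecipientTag: bob", "(b) show",
    "%%EOF",
])
prolog, pages = split_pages(ps)
routed = route_by_tag(pages)
```

Reassembly is then the reverse: write the prolog once, concatenate each recipient's pages, and append a trailer.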
There is always a crash point with too much or too complex data. I would usually fix it by simply reducing the batch size, since the failures were typically quantity-dependent.
For a production run of many thousands of pages this can take quite a while, but it is just a bit of scripting and grinding through all the data.
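The grinding loop can be as simple as a driver that halves the batch size when the underlying tool falls over and restores it once a batch succeeds. A sketch, where `process_batch` stands in for whatever converter actually does the work (an assumption for this example):

```python
# Sketch: process pages in batches, shrinking the batch when a run fails.
# This matches quantity-dependent crashes: smaller batches get through.

def run_in_batches(pages, process_batch, batch_size=500):
    """Process all pages in chunks, halving the chunk size on failure."""
    results, i = [], 0
    size = batch_size
    while i < len(pages):
        chunk = pages[i:i + size]
        try:
            results.extend(process_batch(chunk))
            i += len(chunk)
            size = batch_size          # batch worked: restore normal size
        except RuntimeError:
            if size == 1:
                raise                  # a single page still fails: give up
            size = max(1, size // 2)   # quantity-dependent crash: shrink
    return results
```

A fixed smaller batch size also works; the halving just avoids hand-tuning the number per job.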