I want to extract a certain part of a site and turn that fragment into a PDF document. I create the document following the example from the official iText website:

    PdfPTable table = new PdfPTable(1);
    PdfPCell cell = new PdfPCell();
    ElementList allElements = XMLWorkerHelper.parseToElementList(html, null);
    for (Element element : allElements) {
        cell.addElement(element);
    }
    table.addCell(cell);
    document.add(table);
    document.close();

The document is created, but the Russian words are not displayed. I partially solved it this way:

    BaseFont baseFont = BaseFont.createFont(FONT_LOCATION, ENCODING, BaseFont.EMBEDDED);
    Paragraph paragraph = new Paragraph(title, new Font(baseFont, 18));

But that works only for a single element that I create myself, while XMLWorkerHelper.parseToElementList() returns a list of elements that are already built. How do I apply a font to all of them?
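One way around restyling elements after the fact is to hand XML Worker a font provider up front, so the parser builds every element with a Cyrillic-capable font. A minimal sketch, assuming iText 5 with the XML Worker add-on on the classpath; the font path `fonts/arial.ttf` is a placeholder for any TTF that contains Cyrillic glyphs:

```java
import com.itextpdf.text.Document;
import com.itextpdf.text.pdf.PdfWriter;
import com.itextpdf.tool.xml.XMLWorkerFontProvider;
import com.itextpdf.tool.xml.XMLWorkerHelper;

import java.io.ByteArrayInputStream;
import java.io.FileOutputStream;
import java.nio.charset.StandardCharsets;

public class HtmlToPdfCyrillic {

    public static void main(String[] args) throws Exception {
        String html = "<p>Привет, мир!</p>";

        Document document = new Document();
        PdfWriter writer = PdfWriter.getInstance(document, new FileOutputStream("out.pdf"));
        document.open();

        // Register only our own font; the path is an assumption for this sketch.
        XMLWorkerFontProvider fontProvider =
                new XMLWorkerFontProvider(XMLWorkerFontProvider.DONTLOOKFORFONTS);
        fontProvider.register("fonts/arial.ttf");

        // parseXHtml writes the parsed elements straight into the document,
        // resolving fonts through the provider instead of the defaults.
        XMLWorkerHelper.getInstance().parseXHtml(writer, document,
                new ByteArrayInputStream(html.getBytes(StandardCharsets.UTF_8)),
                StandardCharsets.UTF_8, fontProvider);

        document.close();
    }
}
```

With this approach there is no element list to post-process at all, but it also means the HTML's own font-family styling should match the registered font for the glyphs to resolve.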

  • A similar question on the English Stack Overflow: stackoverflow.com/questions/21254628/… - Vadim Prokopchuk
  • The question is similar only in subject. The author of that question creates the elements himself, whereas I get them from the function. - Babayka

2 answers

    for (Element element : allElements) {
        for (Chunk c : element.getChunks()) {
            c.setFont(someFont);
        }
        cell.addElement(element);
    }
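Spelled out in full, the loop above looks like the sketch below. The font path and the Identity-H encoding are assumptions: Identity-H with an embedded TTF is the usual way to get Cyrillic glyphs into an iText 5 PDF.

```java
import com.itextpdf.text.Chunk;
import com.itextpdf.text.Document;
import com.itextpdf.text.Element;
import com.itextpdf.text.Font;
import com.itextpdf.text.pdf.BaseFont;
import com.itextpdf.text.pdf.PdfPCell;
import com.itextpdf.text.pdf.PdfPTable;
import com.itextpdf.text.pdf.PdfWriter;
import com.itextpdf.tool.xml.ElementList;
import com.itextpdf.tool.xml.XMLWorkerHelper;

import java.io.FileOutputStream;

public class RestyleChunks {

    public static void main(String[] args) throws Exception {
        String html = "<p>Привет, мир!</p>";

        // A font file containing Cyrillic glyphs; the path is an assumption.
        BaseFont baseFont = BaseFont.createFont("fonts/arial.ttf",
                BaseFont.IDENTITY_H, BaseFont.EMBEDDED);
        Font cyrillic = new Font(baseFont, 12);

        Document document = new Document();
        PdfWriter.getInstance(document, new FileOutputStream("restyled.pdf"));
        document.open();

        PdfPTable table = new PdfPTable(1);
        PdfPCell cell = new PdfPCell();
        ElementList allElements = XMLWorkerHelper.parseToElementList(html, null);
        for (Element element : allElements) {
            // Swap the font on every chunk so the Cyrillic text can render.
            for (Chunk c : element.getChunks()) {
                c.setFont(cyrillic);
            }
            cell.addElement(element);
        }
        table.addCell(cell);
        document.add(table);
        document.close();
    }
}
```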

    Remember that XML Worker is deprecated. You should use pdfHTML to convert HTML to PDF. pdfHTML supports HTML5 and CSS3, and many improvements have been made to the layout and table algorithms.

    Take a look at https://itextpdf.com/itext7/pdfHTML
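With pdfHTML (iText 7) the whole conversion, Cyrillic included, collapses to a few lines. A sketch assuming the `html2pdf` dependency is on the classpath; the font path is again a placeholder for any TTF with Cyrillic glyphs:

```java
import com.itextpdf.html2pdf.ConverterProperties;
import com.itextpdf.html2pdf.HtmlConverter;
import com.itextpdf.layout.font.FontProvider;

import java.io.FileOutputStream;

public class Html2PdfCyrillic {

    public static void main(String[] args) throws Exception {
        String html = "<p>Привет, мир!</p>";

        // Register a font that contains Cyrillic glyphs; the path is an assumption.
        FontProvider fontProvider = new FontProvider();
        fontProvider.addFont("fonts/arial.ttf");

        ConverterProperties properties = new ConverterProperties();
        properties.setFontProvider(fontProvider);

        // One call replaces the whole XML Worker pipeline from the question.
        HtmlConverter.convertToPdf(html, new FileOutputStream("html2pdf.pdf"), properties);
    }
}
```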