There is an Oracle 10g database; its full size is 1.3 TB, of which at least 1 TB is PDF documents and pictures that carry no semantic load. How can I make a partial copy, given that I know exactly which tables I want to exclude the data from?

  • Take a dump with the list of tables that you need. For example, `exp user/pass file=/mypath/my.dmp tables=user.TABLE_A,user.TABLE_B` - Chubatiy
  • Let me clarify - I'm not a magician, I'm just learning. There is a huge pile of tables, and only 2 of them are unneeded. What will a dump give me? - duber.fm
  • A dump will make a complete copy of your database, which you can then load with the same utility into another database. It is possible to exclude tables, so in effect it will be your partial copy. Read up on exp and imp, as well as Data Pump Export and Data Pump Import - Chubatiy
  • I've run into another problem. I don't have enough free space to make a full dump. And after consulting with my colleagues, I found out that it's not acceptable to skip those tables entirely - they need to be created, just without data. - duber.fm
  • No problem. Use the rows=n parameter, i.e. `exp user/pass file=/mypath/my.dmp tables=user.TABLE_A,user.TABLE_B rows=n`. You will get only the structure, without data - Chubatiy

3 answers

 RMAN: SKIP [FOREVER] TABLESPACE tablespace_name 

http://docs.oracle.com/cd/E11882_01/backup.112/e10643/rcmsynta2008.htm#RCMRF149
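
For illustration, a minimal sketch of using this clause while duplicating a database; dupdb and docs_ts are hypothetical names, and this only helps if the document tables sit in their own tablespace:

    RMAN> DUPLICATE TARGET DATABASE TO dupdb
            SKIP TABLESPACE docs_ts;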

  • As I understand it, this is suitable only if the data of those tables were in a separate TABLESPACE. What if that tablespace also contains other data that is important to me? - duber.fm

This can be done via remap_data. In the example the schema is sh1 and I limit myself to one table. For a full dump, replace the tables=... parameter with full=Y.

    create table bigdata (key number, value varchar2(32), media blob);

    -- added a test record, then checked its size
    select key, value, lengthb(media) mediasize from bigdata;

           KEY VALUE                             MEDIASIZE
    ---------- -------------------------------- ----------
             1 myvalue                              2048000

    $ expdp sh1/sh1 directory=DATA_PUMP_DIR dumpfile=emptyblob.dmp tables=sh1.bigdata
    . . exported "SH1"."BIGDATA"    1.958 MB    1 rows    -- too much

    create or replace package remap as
      function rmblob(nul blob) return blob;
    end;
    /
    create or replace package body remap as
      function rmblob(nul blob) return blob is
      begin
        return empty_blob();
      end;
    end;
    /

    $ expdp sh1/sh1 directory=DATA_PUMP_DIR dumpfile=emptyblob.dmp tables=sh1.bigdata \
        remap_data=sh1.bigdata.media:sh1.remap.rmblob
    . . exported "SH1"."BIGDATA"    5.820 KB    1 rows

    drop table bigdata;

    $ impdp sh1/sh1 directory=DATA_PUMP_DIR dumpfile=emptyblob.dmp tables=sh1.bigdata
    . . imported "SH1"."BIGDATA"    5.820 KB    1 rows

           KEY VALUE                             MEDIASIZE
    ---------- -------------------------------- ----------
             1 myvalue                                    0
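
Presumably the full=Y variant mentioned above would look like this; the credentials and dump-file name are placeholders, and the remap package must already exist in the source database:

    $ expdp system/pass directory=DATA_PUMP_DIR dumpfile=full_noblob.dmp full=Y \
        remap_data=sh1.bigdata.media:sh1.remap.rmblob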

    In the end it was solved as follows: a full copy was made, the document BLOBs were wiped, everything else was moved to another tablespace, and the emptied leftovers were cleaned up. Not the option I wanted, but I did not find another.
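
For completeness, a rough sketch of what such a cleanup can look like in SQL; the table, column, index and tablespace names here are hypothetical, and the exact steps taken in the original case are not known:

    -- wipe the document BLOBs
    update documents set media = empty_blob();
    commit;

    -- move the table and its (now empty) LOB segment to another tablespace
    alter table documents move tablespace small_ts;
    alter table documents move lob (media) store as (tablespace small_ts);

    -- rebuild indexes invalidated by the move, then drop the emptied tablespace
    alter index documents_pk rebuild tablespace small_ts;
    drop tablespace big_docs_ts including contents and datafiles;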