The task is to write to Oracle from a large number of threads. For this, the following is declared for each thread:
```delphi
type
  TinfoTask = class(TObject)
    pQuery: Pointer;
    nThread: Integer;
  end;

var
  TaskRunQuery: ITask;               // task that executes the query

const
  quTimeOut: Integer = 30000;        // how long to wait before killing the query
```

In the thread constructor:
```delphi
oraSession := TOracleSession.Create(nil);
oraSession.LogonUsername := AUser;
oraSession.LogonPassword := APasswd;
oraSession.LogonDatabase := AConnectionString;
oraSession.ThreadSafe := True;
oraSession.LogOn;

OracleQuery := TOracleQuery.Create(nil);
OracleQuery.Session := oraSession;
OracleQuery.DeleteVariables;
OracleQuery.SQL.Text := 'stored procedure';   // call to a stored procedure
OracleQuery.DeclareVariable('param1', otInteger);
OracleQuery.DeclareVariable('param2', otDate);
OracleQuery.DimPLSQLTable('param2', size_, 0);
OracleQuery.DeclareVariable('param3', otDate);
OracleQuery.DimPLSQLTable('param3', size_, 0);
OracleQuery.DeclareVariable('param4', otString);
OracleQuery.DimPLSQLTable('param4', size_, 10);

infoTask := TinfoTask.Create;
infoTask.pQuery := @ods;
infoTask.nThread := Fnum;
```

The following procedure is declared:
```delphi
procedure RunQuery(Sender: TObject);
var
  num: Integer;
begin
  TaskError := False;
  if Sender <> nil then
  begin
    num := TinfoTask(Sender).nThread;
    try
      TOracleQuery(TinfoTask(Sender).pQuery^).Execute;
    except
      on E: Exception do
      begin
        WriteLog('(!ERROR)' + #13#10 + E.Message);
        TaskError := True;
      end;
    end;
  end;
end;
```

In the thread itself, the query is executed via TTask, so that execution can be aborted if the timeout is exceeded:
```delphi
OracleQuery.Close;
OracleQuery.SetVariable('param1', cnt);
OracleQuery.SetVariable('param2', varP2);
OracleQuery.SetVariable('param3', varP3);
OracleQuery.SetVariable('param4', varP4);

TaskRunQuery := TTask.Create(TObject(infoTask), RunQuery);
TaskRunQuery.Start;
if TaskRunQuery.Wait(quTimeOut) then
  TaskInTime := True
else
begin
  TaskInTime := False;
  WriteLog(' Query wait time exceeded (30 s)');
  oraSession.BreakExecution;
end;
```

Everything works fine under a light load. But once we approach 100,000 records per second, everything starts to slow down, memory grows, and so on. Is there something I have not taken into account? (I have no access to the server itself.)
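For reference, the Wait/BreakExecution cycle described above can be condensed into one self-contained helper. This is only a sketch under the same assumptions as the question (Direct Oracle Access's `TOracleSession`/`TOracleQuery`, `System.Threading`'s `TTask`); `DoTimedExecute` and its parameter names are hypothetical and not part of the original code.

```delphi
uses
  System.SysUtils, System.Threading, Oracle;

// Runs AQuery.Execute on a pool thread and waits up to ATimeOutMs for it.
// Returns True if the query finished in time; otherwise asks the session
// to break the in-flight Oracle call and returns False.
function DoTimedExecute(AQuery: TOracleQuery; ASession: TOracleSession;
  ATimeOutMs: Cardinal): Boolean;
var
  Task: ITask;
begin
  Task := TTask.Create(
    procedure
    begin
      AQuery.Execute;  // executes on a thread-pool thread
    end);
  Task.Start;

  Result := Task.Wait(ATimeOutMs);
  if not Result then
    // Abort the server-side call so the pool thread can return.
    ASession.BreakExecution;
  // Task is an interface (ITask) and is reference-counted, so it is
  // released automatically when it goes out of scope.
end;
```

One detail the sketch makes explicit: `ITask` references are reference-counted and clean themselves up, whereas a plain `TObject` payload such as `TinfoTask` must be freed manually with `Free`; any per-record object that is created without a matching `Free` will make memory grow under sustained load.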