There is a table that has been accumulating since the beginning of the year and now holds 38 million records. It has indexes, but inserts take 2 seconds, which is not good. I decided to partition it. On a test server, a sample query without indexes became about 60 times faster, which is great, but loading the data into the test database took 10 hours.
Question 1: if I just attach the partitioning function to the live database, without touching the existing data, how will that affect speed?
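For reference, here is a minimal sketch of what that could look like with old-style (pre-PostgreSQL 10) inheritance partitioning, driven from Python via psycopg2. The table `metrics`, its columns `ts`/`value`, and the example month are all assumptions, not your real schema. New rows get routed into a child table; everything already sitting in the master stays put.

```python
# Sketch: inheritance-based partitioning, routing NEW inserts into a monthly
# child table while old rows stay in the master. Names are assumptions.
import psycopg2

DDL = """
CREATE TABLE IF NOT EXISTS metrics_y2024m06 (
    CHECK (ts >= '2024-06-01' AND ts < '2024-07-01')  -- example month
) INHERITS (metrics);

CREATE OR REPLACE FUNCTION metrics_insert_trigger()
RETURNS trigger AS $$
BEGIN
    IF NEW.ts >= '2024-06-01' AND NEW.ts < '2024-07-01' THEN
        INSERT INTO metrics_y2024m06 VALUES (NEW.*);
    ELSE
        RETURN NEW;  -- no child for this range yet: fall back to the master
    END IF;
    RETURN NULL;     -- row already routed, skip the insert into the master
END;
$$ LANGUAGE plpgsql;

DROP TRIGGER IF EXISTS metrics_partition_trg ON metrics;
CREATE TRIGGER metrics_partition_trg
    BEFORE INSERT ON metrics
    FOR EACH ROW EXECUTE PROCEDURE metrics_insert_trigger();
"""

with psycopg2.connect("dbname=test") as conn:
    with conn.cursor() as cur:
        cur.execute(DDL)
```

As for the speed: the trigger itself adds only a small constant cost per row, and inserts should get cheaper because they land in a small child table with small indexes. Queries, however, will only benefit for ranges that constraint exclusion can narrow to partitions; anything that has to touch the untouched old data still scans the big master table.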
Question 2: if data older than three months should be stored not per minute but as an hourly average, what is the correct way to do that?
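One common approach, sketched under the same assumed schema (`metrics(ts, value)`) plus a hypothetical target table `metrics_hourly(ts, avg_value)`: aggregate with `date_trunc`, insert the hourly rows, and delete the source rows in the same transaction.

```python
# Sketch: replace per-minute rows older than 3 months with hourly averages.
# metrics and metrics_hourly(ts, avg_value) are assumed names.
import psycopg2

conn = psycopg2.connect("dbname=test")
with conn:  # one transaction: commits on success, rolls back on error
    with conn.cursor() as cur:
        # now() is fixed at transaction start in PostgreSQL, so both
        # statements see exactly the same cutoff
        cur.execute("""
            INSERT INTO metrics_hourly (ts, avg_value)
            SELECT date_trunc('hour', ts), avg(value)
            FROM metrics
            WHERE ts < now() - interval '3 months'
            GROUP BY 1
        """)
        cur.execute(
            "DELETE FROM metrics WHERE ts < now() - interval '3 months'")
conn.close()
```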
UPD
No, I will keep the old data: it will simply stay in the master table, while new rows will be written into the partitions. As for the function (most likely a trigger): how do I run it on a schedule? I am leaning towards a Python script that once a month would dump the old partitions, clean the table, and write the aggregated data back. But a partition holds 7 million rows; will Python cope with that? It would probably be more correct to write a daemon/script that processes a partition slowly: for example, every 30 minutes it would take a slice of data from the partition (say, one day's worth), process it, delete the old rows, and insert the new aggregated ones. I have not come up with a good "processed or not" flag, but I can step through by interval instead.
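Since you can step by interval, a watermark can replace the "processed" flag entirely: keep a one-row state table holding the first day not yet rolled up and advance it on every run. A sketch under made-up names (`rollup_state(next_day)`, `metrics`, `metrics_hourly`):

```python
# Sketch of the slow-processing daemon: every 30 minutes it rolls up one day
# of old per-minute rows into hourly averages and advances a watermark, so
# the interval itself marks what has been processed -- no flag column needed.
import datetime
import time
import psycopg2

def rollup_one_day(conn):
    with conn:  # one transaction per one-day slice
        with conn.cursor() as cur:
            # single-row watermark table: the first day not yet processed
            cur.execute("SELECT next_day FROM rollup_state FOR UPDATE")
            (day,) = cur.fetchone()
            nxt = day + datetime.timedelta(days=1)
            # (a real daemon would also check that `day` is already past
            # the 3-month threshold before touching it)
            cur.execute("""
                INSERT INTO metrics_hourly (ts, avg_value)
                SELECT date_trunc('hour', ts), avg(value)
                FROM metrics
                WHERE ts >= %s AND ts < %s
                GROUP BY 1
            """, (day, nxt))
            cur.execute("DELETE FROM metrics WHERE ts >= %s AND ts < %s",
                        (day, nxt))
            cur.execute("UPDATE rollup_state SET next_day = %s", (nxt,))

conn = psycopg2.connect("dbname=test")
while True:
    rollup_one_day(conn)
    time.sleep(30 * 60)
```

Note that the 7 million rows never pass through Python at all: the `INSERT ... SELECT` and `DELETE` execute entirely inside the server, so Python only orchestrates, and the partition size is not a Python problem.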
Question 3: I use the data to draw statistics, so a long straight segment of identical values can be drawn from just 2 points. Is there a way to "bite out" such long runs of identical data?
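One way to do that on the application side before plotting: keep only the first and last point of each run of equal values; the chart reconstructs the straight segment from those two endpoints. A small pure-Python sketch:

```python
# Sketch: compress runs of identical values, keeping only the endpoints
# of each run; two points are enough to draw the straight segment.
from itertools import groupby
from operator import itemgetter

def compress_runs(points):
    """points: iterable of (ts, value) pairs sorted by ts."""
    out = []
    for _, run in groupby(points, key=itemgetter(1)):
        run = list(run)
        out.append(run[0])       # first point of the run
        if len(run) > 1:
            out.append(run[-1])  # last point; the middle ones are redundant
    return out

series = [(1, 5), (2, 5), (3, 5), (4, 5), (5, 7), (6, 7), (7, 3)]
print(compress_runs(series))
# -> [(1, 5), (4, 5), (5, 7), (6, 7), (7, 3)]
```

The same filter can also be pushed into the database with a window function (comparing each value to `lag(value)` ordered by `ts`), which avoids shipping the full series to Python at all.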