I have an access-log table for users with the columns "User ID", "IP", and "Date". Is it possible to find all duplicate multi-accounts in it without heavy MySQL queries and a heavy load on the server? I can't find an optimal algorithm; everything I come up with either takes a long time to compute or puts a huge load on MySQL.

Example (log entries):
"10" "155.166.11.2" "2018-01-22 13:08:36"
"122" "127.0.0.1" "2018-01-22 13:19:00"
"13" "144.11.11.4" "2018-01-31 17:16:56"
"10" "127.0.0.1" "2018-01-31 17:26:35"
"99" "155.166.11.2" "2018-01-31 17:26:55"
"13" "12.11.22.4" "2018-01-31 17:43:56"
"18" "145.106.11.2" "2018-01-31 18:50:18"
"11" "144.11.11.4" "2018-01-31 18:54:18"


Result:
"10, 99, 122" is the same user.
"11, 13" is the same user.

    1 answer

    select ip, group_concat(distinct id) from `Access-Log` group by ip having count(distinct id)>1 
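
    On the sample data above this returns one row per shared IP (the order inside GROUP_CONCAT is not guaranteed), roughly:

    127.0.0.1      10,122
    144.11.11.4    11,13
    155.166.11.2   10,99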

    The load will of course be significant no matter what: the whole table has to be read and grouped. An index on the ip column can reduce it. If this is done often, you should think about storing this information separately.
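
    For example, a minimal sketch of both ideas, assuming the log columns are actually named `ip` and `id` (the index and table names here are made up; adjust everything to your real schema):

    -- Index so the GROUP BY on ip does not have to scan and sort the whole table each time
    ALTER TABLE `Access-Log` ADD INDEX idx_ip_id (ip, id);

    -- A separate, pre-aggregated table refreshed periodically (e.g. from cron),
    -- so the heavy query does not run on every page request
    CREATE TABLE IF NOT EXISTS ip_shared_users (
        ip    VARCHAR(45) NOT NULL,  -- 45 characters also fits IPv6 in text form
        users TEXT        NOT NULL,  -- comma-separated list of user ids
        PRIMARY KEY (ip)
    );

    -- Rebuild the aggregate (note: rows for IPs that no longer qualify are not removed here;
    -- truncate first or delete them separately if that matters)
    REPLACE INTO ip_shared_users (ip, users)
    SELECT ip, GROUP_CONCAT(DISTINCT id)
    FROM `Access-Log`
    GROUP BY ip
    HAVING COUNT(DISTINCT id) > 1;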

    And of course, you need to understand that the same ip does not mean that it is the same user. Maybe he comes out through the mobile operator, and they change the user's ip 10 times a day and that the most unpleasant one and the same ip is given out to many different users. And fixed-line operators can also issue ip as they please, including the simultaneous release to the Internet of hundreds of users with one address.