That's normal. Here's some relevant command output from a 15600 upgraded from R81.10 (or maybe R80.40, I forget) to R81.20, with a healthy RAID:
[Expert@SomeCluster1 STANDBY]# fw ver
This is Check Point's software version R81.20 - Build 012
[Expert@SomeCluster1 STANDBY]# raid_diagnostic
Raid status:
VolumeID:0 RaidLevel: RAID-1 NumberOfDisks:2 RaidSize:447GB State:OPTIMAL Flags:ENABLED
DiskID:0 DiskNumber:0 Vendor:ATA ProductID:SAMSUNG MZ7KM480 Revision:104Q Size:447GB State:ONLINE Flags:NONE
DiskID:1 DiskNumber:1 Vendor:ATA ProductID:SAMSUNG MZ7KM480 Revision:104Q Size:447GB State:ONLINE Flags:NONE
[Expert@SomeCluster1 STANDBY]# fdisk -l
Disk /dev/sda: 480.1 GB, 480103981056 bytes, 937703088 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x0008f6de
Device      Boot      Start        End      Blocks  Id  System
/dev/sda1   *            63     610469     305203+  fd  Linux raid autodetect
/dev/sda2            610470   67713974   33551752+  fd  Linux raid autodetect
/dev/sda3          67713975  937697984   434992005  fd  Linux raid autodetect
Disk /dev/sdb: 480.1 GB, 480103981056 bytes, 937703088 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x0008f6de
Device      Boot      Start        End      Blocks  Id  System
/dev/sdb1   *            63     610469     305203+  fd  Linux raid autodetect
/dev/sdb2            610470   67713974   33551752+  fd  Linux raid autodetect
/dev/sdb3          67713975  937697984   434992005  fd  Linux raid autodetect
Disk /dev/md0: 312 MB, 312410112 bytes, 610176 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/md1: 34.4 GB, 34356920320 bytes, 67103360 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/md2: 445.4 GB, 445431742464 bytes, 869983872 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
[Expert@SomeCluster1 STANDBY]# mdadm --misc -Q --detail /dev/md0
/dev/md0:
Version : 0.90
Creation Time : Wed Aug 8 06:10:37 2018
Raid Level : raid1
Array Size : 305088 (297.94 MiB 312.41 MB)
Used Dev Size : 305088 (297.94 MiB 312.41 MB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Thu May 2 10:36:18 2024
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Consistency Policy : resync
UUID : 00112233:44556677:8899aabb:ccddeefd
Events : 0.36
Number   Major   Minor   RaidDevice   State
   0       8       1         0        active sync   /dev/sda1
   1       8      17         1        active sync   /dev/sdb1
[Expert@SomeCluster1 STANDBY]# mdadm --misc -Q --detail /dev/md1
/dev/md1:
Version : 0.90
Creation Time : Wed Aug 8 06:10:32 2018
Raid Level : raid1
Array Size : 33551680 (32.00 GiB 34.36 GB)
Used Dev Size : 33551680 (32.00 GiB 34.36 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 1
Persistence : Superblock is persistent
Update Time : Tue Mar 19 00:23:22 2024
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Consistency Policy : resync
UUID : 00112233:44556677:8899aabb:ccddeefe
Events : 0.8
Number   Major   Minor   RaidDevice   State
   0       8       2         0        active sync   /dev/sda2
   1       8      18         1        active sync   /dev/sdb2
[Expert@SomeCluster1 STANDBY]# mdadm --misc -Q --detail /dev/md2
/dev/md2:
Version : 0.90
Creation Time : Wed Aug 8 06:10:32 2018
Raid Level : raid1
Array Size : 434991936 (414.84 GiB 445.43 GB)
Used Dev Size : 434991936 (414.84 GiB 445.43 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 2
Persistence : Superblock is persistent
Update Time : Fri May 3 09:31:11 2024
State : active
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Consistency Policy : resync
UUID : 00112233:44556677:8899aabb:ccddeeff
Events : 0.766
Number   Major   Minor   RaidDevice   State
   0       8       3         0        active sync   /dev/sda3
   1       8      19         1        active sync   /dev/sdb3
[Expert@SomeCluster1 STANDBY]#
On boot, the system tries to identify all disks. Any disk that is a member of an existing md(4) set is attached to that set; any disk that isn't results in a brand-new set being created with just that disk in it. That second case is the problem you've hit.
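If you want to confirm that's what happened, the md superblock on each partition records the UUID of the set it belongs to, so you can compare the two disks directly. A quick check, assuming the original disk is /dev/sda and the new one is /dev/sdb (adjust the names to your hardware):

    cat /proc/mdstat                        # every md set the kernel assembled at boot
    mdadm --examine /dev/sda1 | grep UUID   # set UUID recorded in the superblock on sda1
    mdadm --examine /dev/sdb1 | grep UUID   # a different UUID here means sdb1 was
                                            # assembled into a separate, newly created set

On a healthy box like the one above, the UUIDs match and /proc/mdstat shows exactly three sets (md0, md1, md2).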
It's possible to fix this live, but it's risky. It's far simpler to shut down, remove the new drive, boot the system, re-attach the new drive once you can log in, and then probe the SATA links using the command in sk157874.
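Fleshed out, that safer route looks roughly like the sketch below. This is only the generic Linux/mdadm shape of it, not a supported Check Point procedure: the device and partition names are assumptions based on the healthy layout shown above, and the rescan line is the generic kernel mechanism, not necessarily the exact command sk157874 gives you.

    # After shutting down, pulling the new drive, booting, and logging in:
    # physically re-insert the drive, then make the kernel probe the SATA link
    # (use the command from sk157874 here; this is the generic form, and the
    # host number varies per box)
    echo "- - -" > /sys/class/scsi_host/host1/scan

    # Wipe the stray superblocks so those partitions stop assembling on their own
    mdadm --zero-superblock /dev/sdb1
    mdadm --zero-superblock /dev/sdb2
    mdadm --zero-superblock /dev/sdb3

    # Add each partition back into the matching mirror, then watch the rebuild
    mdadm /dev/md0 --add /dev/sdb1
    mdadm /dev/md1 --add /dev/sdb2
    mdadm /dev/md2 --add /dev/sdb3
    cat /proc/mdstat

The rebuild of md2 (the big ~445 GB mirror) can take a while. The box stays usable during the resync, but I'd let it finish before failing the cluster back over.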