Clients/SSLOCSD/slack/2026/03/2026-03-17_south-county.md

Source: slack · Chunks: 3 · Entities: 17 · Type: Doc

Content

# #south-county — 2026-03-17

**08:52 [Mason Radke](https://slack.com/archives/C08G4KZG7D5/p1773762752491339):** let me know when you have some time there today
**08:52 [Mason Radke](https://slack.com/archives/C08G4KZG7D5/p1773762763627599):** also let me know if you need some assistance on the HMI
**08:55 [Kevin](https://slack.com/archives/C08G4KZG7D5/p1773762953351669):** complaints of the day
1. the primary sludge pumps alarmed again this morning
2. the plc communications alarm on the old alarm board was lit up this morning but no comm alarm was sent
3. influent plc comm is out
**08:56 [Mason Radke](https://slack.com/archives/C08G4KZG7D5/p1773762983755059):** i noticed influent PLC being out. was trying to check the time settings on it
**08:56 [Mason Radke](https://slack.com/archives/C08G4KZG7D5/p1773763005696459):** look at the alarm, see if it says communication failure
**08:58 [Kevin](https://slack.com/archives/C08G4KZG7D5/p1773763108350229):** look at the alarm on scada?
**08:58 [Mason Radke](https://slack.com/archives/C08G4KZG7D5/p1773763127587489):** alarm history on scada. for the sludge pumps.
**08:58 [Kevin](https://slack.com/archives/C08G4KZG7D5/p1773763131765109):** copy
**08:59 [Kevin](https://slack.com/archives/C08G4KZG7D5/p1773763158988159):** I'll dive into this. sounds like you are busy with Reno
**08:59 [Mason Radke](https://slack.com/archives/C08G4KZG7D5/p1773763180828659):** he has a due date on this of 3/27 and i'm not sure how much work it's going to be yet
**08:59 [Kevin](https://slack.com/archives/C08G4KZG7D5/p1773763189524719):** copy
**09:01 [Mason Radke](https://slack.com/archives/C08G4KZG7D5/p1773763268500759):** i don't like all these weird comm issues.. i think we need to get rockwell involved, but also feel like we should upgrade the server and FT View to the latest version and patches before calling them..
i think they are going to want us on the latest patch
**09:16 [Kevin](https://slack.com/archives/C08G4KZG7D5/p1773764169444099):** a reboot of the plc and radio at the headworks plc 192.168.1.20 seems to have brought data back online to scada
**09:16 [Mason Radke](https://slack.com/archives/C08G4KZG7D5/p1773764199851929):** do you think the PLC was locked up? or was it a radio issue?
**09:16 [Mason Radke](https://slack.com/archives/C08G4KZG7D5/p1773764204708739):** or did they get power cycled together
**09:17 [Kevin](https://slack.com/archives/C08G4KZG7D5/p1773764261533639):** I cycled them together. Radio seemed fine via leds, but I'm just in fix-it mode due to time crunch
**09:17 [Mason Radke](https://slack.com/archives/C08G4KZG7D5/p1773764273674509):** i hear ya
**09:18 [Mason Radke](https://slack.com/archives/C08G4KZG7D5/p1773764281217219):** glad it came back up
**09:18 [Mason Radke](https://slack.com/archives/C08G4KZG7D5/p1773764304181009):** i can check the DST setting on that one now.
**09:18 [Mason Radke](https://slack.com/archives/C08G4KZG7D5/p1773764309742209):** or when i get a minute
**09:18 [Kevin](https://slack.com/archives/C08G4KZG7D5/p1773764333761029):** this one object keeps giving me an error for the scr1101 when I try to open it. Should I recreate it? ![[F0AM1UW7BN1_image.png]]
**09:20 [Kevin](https://slack.com/archives/C08G4KZG7D5/p1773764442267939):** I reloaded the client
**09:20 [Kevin](https://slack.com/archives/C08G4KZG7D5/p1773764444719719):** it fixed it
**09:20 [Mason Radke](https://slack.com/archives/C08G4KZG7D5/p1773764446535979):** good
**09:21 [Kevin](https://slack.com/archives/C08G4KZG7D5/p1773764507439999):** now on to the primary sludge pump failure alarms.
I'm going to unplug the eth cable to them and see if I get a fault
**09:22 [Mason Radke](https://slack.com/archives/C08G4KZG7D5/p1773764523313589):** ok let me know what you see
**09:32 [Kevin](https://slack.com/archives/C08G4KZG7D5/p1773765135921829):** There were faults on pump 1 and pump 3 vfds for Enet Loss. Do you think Win911 will recycle alarm call for previous alarms every 24 hours?
**09:32 [Mason Radke](https://slack.com/archives/C08G4KZG7D5/p1773765162572159):** what do you mean?
**09:33 [Kevin](https://slack.com/archives/C08G4KZG7D5/p1773765208216139):** I think there is a function that "renews" a previously acknowledged alarm if it hasn't cleared.
**09:34 [Mason Radke](https://slack.com/archives/C08G4KZG7D5/p1773765242309609):** hm, i'm not sure.
**09:35 [Kevin](https://slack.com/archives/C08G4KZG7D5/p1773765304626569):** although I'm not even sure how these are alarming, there are no alarms configured? ![[F0AM5GH2Q4A_image.png]]
**09:36 [Mason Radke](https://slack.com/archives/C08G4KZG7D5/p1773765364637739):** in the alarm history it should show the tag that it's alarming from i think
**09:36 [Mason Radke](https://slack.com/archives/C08G4KZG7D5/p1773765382453779):** there is no comm fail alarm there. as you can see
**09:36 [Kevin](https://slack.com/archives/C08G4KZG7D5/p1773765386733579):** ![[F0ALZ5JH4V9_image.png]]
**09:36 [Mason Radke](https://slack.com/archives/C08G4KZG7D5/p1773765413347369):** when you look at the details does it say communication failure?
**09:37 [Kevin](https://slack.com/archives/C08G4KZG7D5/p1773765433637219):** ![[F0ALZ5RTQ8K_image.png]]
**09:38 [Mason Radke](https://slack.com/archives/C08G4KZG7D5/p1773765481309779):** yeah. that's what i saw last time as well.
i think we need to ask rockwell about that communication failure thing
**09:38 [Kevin](https://slack.com/archives/C08G4KZG7D5/p1773765501033869):** ![[F0AMJEPC3ED_image.png]]
**09:38 [Kevin](https://slack.com/archives/C08G4KZG7D5/p1773765516003929):** pump 2 says different
**09:38 [Kevin](https://slack.com/archives/C08G4KZG7D5/p1773765524589899):** but both came in at the same time
**09:39 [Kevin](https://slack.com/archives/C08G4KZG7D5/p1773765553353169):** gotta be comms, but what's odd is that p2 and p4 alarmed, but I saw p1 and p3 had the faults locally
**09:39 [Mason Radke](https://slack.com/archives/C08G4KZG7D5/p1773765576263059):** ya that's weird
**09:41 [Kevin](https://slack.com/archives/C08G4KZG7D5/p1773765713000279):** same two pumps alarmed on the 15th ![[F0ALQ4ZVD0X_image.png]]
**09:43 [Kevin](https://slack.com/archives/C08G4KZG7D5/p1773765805951699):** then I just noticed this popped up ![[F0AM5JHPKFC_image.png]]
**09:44 [Mason Radke](https://slack.com/archives/C08G4KZG7D5/p1773765860354959):** we should probably call support now instead of the end of the day
**09:45 [Mason Radke](https://slack.com/archives/C08G4KZG7D5/p1773765932695499):** I also get constant network alerts from the monitor I set up
**09:46 [Kevin](https://slack.com/archives/C08G4KZG7D5/p1773765986174139):** anything specific from those alerts?
**09:49 [Mason Radke](https://slack.com/archives/C08G4KZG7D5/p1773766166226889):** Looks like just ping failure.
**10:01 [Mason Radke](https://slack.com/archives/C08G4KZG7D5/p1773766916992399):** ![[F0ALQAB247R_Image_from_iOS]]
**10:04 [Kevin](https://slack.com/archives/C08G4KZG7D5/p1773767058404719):** investigating primary sludge pump vfd intermittent ethernet fail alarms. ips of these vfds end in 101, 102, 103, 104. asked AutoBot if there are any other ips with those numbers in them and it came back with an HMI at 101 and Historian at 104.
I'm sure these are not the actual IPs now, but I'm going to investigate
**10:05 [Mason Radke](https://slack.com/archives/C08G4KZG7D5/p1773767100606519):** Ok. Let's find the sources and update if it's old info
**10:05 [Kevin](https://slack.com/archives/C08G4KZG7D5/p1773767132978349):** never mind, it brought up some other ip addresses from something other than south county, even though I have south county selected
**10:06 [Kevin](https://slack.com/archives/C08G4KZG7D5/p1773767181557779):** now logix crashed.
**10:06 [Kevin](https://slack.com/archives/C08G4KZG7D5/p1773767189056689):** gonna reboot EWS
**10:08 [Mason Radke](https://slack.com/archives/C08G4KZG7D5/p1773767334388409):** Ok
**10:11 [Kevin](https://slack.com/archives/C08G4KZG7D5/p1773767464366629):** the packet loss error message is for .71 which is the influent pump station plc, not the mcc plc which polls these sludge pump vfds
**10:18 [Kevin](https://slack.com/archives/C08G4KZG7D5/p1773767908996599):** @Mason Radke do you see any issue bumping up the RPI on these vfds from 250ms to 500ms?
**10:24 [Mason Radke](https://slack.com/archives/C08G4KZG7D5/p1773768266837499):** No
**10:24 [Mason Radke](https://slack.com/archives/C08G4KZG7D5/p1773768287978069):** At this point let's try whatever it takes to give us more info
**10:29 [Kevin](https://slack.com/archives/C08G4KZG7D5/p1773768548434649):** also setting enet loss action to STOP from FAULT so that it won't require a hard reset ![[F0AN08JHTL0_image.png]]
**10:34 [Kevin](https://slack.com/archives/C08G4KZG7D5/p1773768865802169):** as a note, I'm beginning to think this switch in the influent pump plc may be the bottleneck. It has the wireless comms on it as well. just a hunch
**10:38 [Mason Radke](https://slack.com/archives/C08G4KZG7D5/p1773769104713609):** Let's be sure to note that as something to test
**10:39 [Mason Radke](https://slack.com/archives/C08G4KZG7D5/p1773769190317199):** Maybe you're onto something with that setting.
If the drive loses connection to the network, it will fault; that could be part of the puzzle.
**10:58 [Kevin](https://slack.com/archives/C08G4KZG7D5/p1773770280304289):** tracking down packet loss alarms to .70 influent plc. going to inhibit these 4 non-existent vfds ![[F0AM45RBRGE_image.png]]
**10:58 [Kevin](https://slack.com/archives/C08G4KZG7D5/p1773770318367359):** actually, only the RECWTR vfd was still enabled
**11:00 [Kevin](https://slack.com/archives/C08G4KZG7D5/p1773770449813729):** I guess this isn't a big deal since it's connected with the subnet mask 255.255.0.0 rather than 255.255.255.0? ![[F0AN0G4BQF2_image.png]]
**11:01 [Kevin](https://slack.com/archives/C08G4KZG7D5/p1773770508525439):** port diagnostic on influent plc .70 seems fine ![[F0AN0GM57QQ_image.png]]
**11:04 [Mason Radke](https://slack.com/archives/C08G4KZG7D5/p1773770696874439):** Subnet shouldn't affect anything, but we should probably fix it anyway
**11:18 [Kevin](https://slack.com/archives/C08G4KZG7D5/p1773771538017019):** Record of the last packet drops ![[F0AM4AWLNAJ_image.png]]
**11:19 [Kevin](https://slack.com/archives/C08G4KZG7D5/p1773771568218199):** downloaded the FT diagnostic log as XML. gonna feed to Claude for analysis
**11:25 [Mason Radke](https://slack.com/archives/C08G4KZG7D5/p1773771959027239):** Great idea
**12:01 [Kevin](https://slack.com/archives/C08G4KZG7D5/p1773774118800039):** some very interesting analysis incoming
**12:07 [Kevin](https://slack.com/archives/C08G4KZG7D5/p1773774468838719):** [[F0AN123H0E4_FT_Diagnostic_Analysis_2026-03-17.pdf]]
**12:07 [Kevin](https://slack.com/archives/C08G4KZG7D5/p1773774479155849):** this is only for 2/1 to today
**12:17 [Kevin](https://slack.com/archives/C08G4KZG7D5/p1773775071333079):** hmmm. AutoBot says .90 is an access point. Investigating
**12:18 [Mason Radke](https://slack.com/archives/C08G4KZG7D5/p1773775103789569):** on a call with reno again..
standby
**12:18 [Kevin](https://slack.com/archives/C08G4KZG7D5/p1773775136888809):** yeah no problem, just logging
**12:19 [Kevin](https://slack.com/archives/C08G4KZG7D5/p1773775148368909):** .90 is the CCT PLC
**12:19 [Mason Radke](https://slack.com/archives/C08G4KZG7D5/p1773775169054389):** we need to remove or update the bad info
**12:21 [Kevin](https://slack.com/archives/C08G4KZG7D5/p1773775292715619):** since inhibiting the RECWTR vfd in .70 and increasing RPI of the 4 sludge pump vfds in .71 the log looks clean so far. Running a long ping to .90 right now
**13:10 [Kevin](https://slack.com/archives/C08G4KZG7D5/p1773778251654739):** now working on loading this hmi
**13:10 [Kevin](https://slack.com/archives/C08G4KZG7D5/p1773778256327109):** progress ![[F0AMAQQGJ0L_image.png]]
**13:12 [Mason Radke](https://slack.com/archives/C08G4KZG7D5/p1773778373794479):** Look at you go!
**13:13 [Mason Radke](https://slack.com/archives/C08G4KZG7D5/p1773778383737819):** Knocking 'em out today
**13:28 [Kevin](https://slack.com/archives/C08G4KZG7D5/p1773779308251299):** Great success ![[F0AM0USHPLK_PXL_20260317_202553597.jpg]]
**13:36 [Mason Radke](https://slack.com/archives/C08G4KZG7D5/p1773779761976689):** Nice!! Man we spent a whole day on a broken HMI
**13:56 [Kevin](https://slack.com/archives/C08G4KZG7D5/p1773781016315819):** yep, two of us lol. I mentioned that to Mychal and he shrugged it off, so I guess we are ok
**13:58 [Kevin](https://slack.com/archives/C08G4KZG7D5/p1773781116459539):** btw, it still doesn't ping though.
**14:14 [Mason Radke](https://slack.com/archives/C08G4KZG7D5/p1773782068670789):** Good
**14:14 [Mason Radke](https://slack.com/archives/C08G4KZG7D5/p1773782079246209):** I would love to get that network cleaned up.
What is the prognosis
**14:25 [Mason Radke](https://slack.com/archives/C08G4KZG7D5/p1773782722741389):** I haven't had a chance to look through the report yet
**14:26 [Kevin](https://slack.com/archives/C08G4KZG7D5/p1773782798332509):** It seems ok for now. Read the report when you can.
**15:57 [Mason Radke](https://slack.com/archives/C08G4KZG7D5/p1773788279826729):** Wow great analysis of the logs
**15:58 [Mason Radke](https://slack.com/archives/C08G4KZG7D5/p1773788289829569):** What recommendations did it make?
**18:57 [Mason Radke](https://slack.com/archives/C08G4KZG7D5/p1773799062908629):** ![[F0AMCGQFQ20_Image_from_iOS]]
**18:58 [Mason Radke](https://slack.com/archives/C08G4KZG7D5/p1773799109518119):** These have been coming in a lot more this afternoon.
**18:58 [Mason Radke](https://slack.com/archives/C08G4KZG7D5/p1773799116988129):** Anything I can check on?
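Kevin flags an 11:00 finding that a device was configured with subnet mask 255.255.0.0 instead of 255.255.255.0, and Mason judges it harmless. Python's `ipaddress` module illustrates why comms still work either way: both masks put the plant's 192.168.1.x peers on-link, the /16 just makes the "local" network far larger than intended. The peer address below is from the chat; the network layout is otherwise an illustrative sketch:

```python
import ipaddress

# Mask the device actually had (/16) vs. the intended one (/24).
wide = ipaddress.ip_network("192.168.0.0/16")
narrow = ipaddress.ip_network("192.168.1.0/24")

plc = ipaddress.ip_address("192.168.1.70")  # influent PLC, from the chat
print(plc in wide, plc in narrow)  # True True — on-link under either mask

# The difference only matters for addresses outside 192.168.1.x:
stray = ipaddress.ip_address("192.168.2.5")  # hypothetical address
print(stray in wide, stray in narrow)  # True False — a /24 would route this via the gateway
```

This supports Mason's call: fix the mask for hygiene, but it doesn't explain the intermittent drops.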
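Kevin's "long ping to .90" and Mason's monitor that reports "just ping failure" both amount to timestamped reachability logging. A minimal sketch of that idea, assuming a Linux-style `ping` (`-c` count, `-W` timeout); the host, probe count, and log format here are illustrative, not the tooling actually used on site:

```python
import datetime
import subprocess


def ping_once(host: str, timeout_s: int = 2) -> bool:
    """One ICMP echo via the system ping; True on success."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", str(timeout_s), host],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0


def log_pings(host: str, count: int, logfile: str) -> None:
    """Append one timestamped ok/LOSS line per probe, so drops can later
    be correlated against the FT diagnostic log."""
    with open(logfile, "a") as f:
        for _ in range(count):
            stamp = datetime.datetime.now().isoformat(timespec="seconds")
            status = "ok" if ping_once(host) else "LOSS"
            f.write(f"{stamp} {host} {status}\n")


def loss_ratio(lines: list[str]) -> float:
    """Fraction of logged probes that were losses."""
    total = sum(1 for line in lines if line.strip())
    lost = sum(1 for line in lines if line.rstrip().endswith("LOSS"))
    return lost / total if total else 0.0
```

Usage would be something like `log_pings("192.168.1.90", 3600, "cct_ping.log")` during the test window, then `loss_ratio(open("cct_ping.log").readlines())` to summarize.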
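At 11:19 Kevin exports the FT diagnostic log as XML before feeding it to Claude. A first-pass tally of that kind of export can also be done locally with the standard library. Note the `<Record severity="...">` element and attribute names below are invented stand-ins — the chat doesn't show the real FactoryTalk export schema, so the tag names would need adjusting to match an actual file:

```python
import xml.etree.ElementTree as ET
from collections import Counter


def count_by_severity(xml_text: str) -> Counter:
    """Tally diagnostic records by their severity attribute.

    The <Record severity="..."> schema is a hypothetical stand-in for
    whatever the real FactoryTalk export uses.
    """
    root = ET.fromstring(xml_text)
    return Counter(rec.get("severity", "unknown") for rec in root.iter("Record"))


sample = (
    "<Log>"
    "<Record severity='Error'/>"
    "<Record severity='Error'/>"
    "<Record severity='Warning'/>"
    "</Log>"
)
print(count_by_severity(sample))  # Counter({'Error': 2, 'Warning': 1})
```

A breakdown like this (by severity, then by source device) is roughly what the attached analysis PDF would summarize.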

Extracted Entities

| Type | Key | Value | Confidence | Evidence |
|------|-----|-------|------------|----------|
| contact | Person | Mason Radke | 100% | 08:52 [Mason Radke](https://slack.com/archives/C08G4KZG7D5/p1773762752491339) |
| contact | Person | Kevin | 100% | 08:55 [Kevin](https://slack.com/archives/C08G4KZG7D5/p1773762953351669) |
| server | Headworks PLC IP | 192.168.1.20 | 100% | a reboot of the plc and radio at the headworks plc 192.168.1.20 |
| server | VFD IPs | {192.168.1.101, 192.168.1.102, 192.168.1.103, 192.168.1.104} | 90% | ips of these vfds end in 101, 102, 103, 104 |
| server | CCT PLC IP | 192.168.1.90 | 90% | .90 is the CCT PLC |
| server | Influent Pump Station PLC IP | 192.168.1.70 | 90% | tracking down packet loss alarms to .70 influent plc |
| server | MCC PLC IP | 192.168.1.71 | 90% | the packet loss error message is for .71 which is the influent pump station plc, not the mcc plc |
| server | HMI IP | 192.168.1.101 | 80% | AutoBot if there are any other ips with those numbers ... HMI at 101 |
| server | Historian IP | 192.168.1.104 | 80% | AutoBot ... Historian at 104 |
| site | Client Plant | South County | 100% | #south-county — 2026-03-17 |
| system | SCADA System | FactoryTalk View | 90% | upgrade the server and FT View to the latest version and patches |
| system | PLC Brand | Rockwell | 90% | i think we need to get rockwell involved |
| task | Reno Work Due Date | 2026-03-27 | 100% | he has a due date on this of 3/27 |
| task | Upgrade Server and FT View | Upgrade to latest version and patches before calling Rockwell | 90% | we should upgrade the server and FT View to the latest version and patches before calling them |
| task | Set Enet Loss Action | Set enet loss action to STOP from FAULT on VFDs | 90% | also setting enet loss action to STOP from FAULT |
| task | Increase VFD RPI | Increase RPI on sludge pump VFDs from 250ms to 500ms | 90% | do you see any issue bumping up the RPI on these vfds from 250ms to 500ms? |
| task | Test Switch on Influent Pump PLC | Test suspected bottleneck switch on influent pump PLC | 80% | this switch in the influent pump plc maybe the bottleneck |
File: Clients/SSLOCSD/slack/2026/03/2026-03-17_south-county.md
Updated: 2026-03-18 02:00:24.914025