The bugs came out again today. As with the episode in mid-August, no one seems to know what happened. It came like a soft rumbling in the ground, like a coming earthquake. Luckily the scale did not tip over. The first inkling came at dawn, when the support staff called me at home as I was getting ready for work. My mobile phone rang at about 7:30 am while I was dressing. He had received a call from the warehouse at 6:30 am and had tried the usual steps to get things running, but the program just didn't work, so he called me an hour later after his futile attempts. I did not check my laptop right away, thinking I had time for a quick breakfast and the drive to work. My plan was to have everything fixed by 8:30 am; not too bad a delay for the warehouse. But when I got to my desk at 8 am, it looked like something had changed. The system was acting strangely. We were in for a wild ride today.
We got to work checking everything on the server, logging into systems and making phone calls to the warehouse. The support staff walked over to my desk as we fiddled with the keyboards, looking at the computer screens for any sign of a logical solution. But it was a mystery. The system was hanging in the warehouse, stalled by some invisible barrier we could not find. It seemed as if some malevolent virus was stalking the network, mischievously delaying the packets of data whizzing by and creating havoc at the warehouse. We just could not find what was wrong. We tried the usual steps again, the same ones the support staff had run earlier. At first nothing happened, but then suddenly the program started to work, sending the reports from head office down the wire to the warehouse. We had beaten the bug. Or so we thought. We called the warehouse and asked them to try again, but the fellas got the same erroneous response. The program was still not working at their end.
So the rest of the day was spent helping the fellas remotely, printing their reports and doing the other things they normally do. Otherwise work would stop and the product would not be delivered to our loyal customers. We continued helping throughout the day, and the fellas showed no irritation except for that sweet old lady, a grandmother actually, who displayed a brief flicker of contempt like the flash of a steel blade sinking into your flesh. A problem ticket was created at the help desk so the root cause could be found and fixed. Then, like bumbling idiots, we watched the ticket make its way around the globe, first to the warehouse support staff in Bangalore, India. After I chatted with him, he checked their application, found nothing wrong and transferred the ticket to IBM, but not before alerting his boss here in South Carolina, so we got into a short group chat to make sure the problem was not caused by their application. And so the ticket made its way to a different group of folks.
Soon the hardware folks were chatting me up about the problem, a person I had worked with in the past, also based in India. We exchanged screen captures, information on the problem, possible avenues to check and so on. But it was an exchange of polite accusation, each of us trying to point the blame at the other with civilized courtesy. Neither of us would budge, sticking to the logical constructs in our minds until another path was found to divert the problem to the application team in Cleveland, Ohio. So I sent those folks an email with screen captures of the server errors. No reply yet as of this writing. Soon a head honcho from IBM called me, a nice guy actually, with a good reputation and respected by the department. It looked like the wire had not been upgraded as planned and the bandwidth had not been increased. I was skeptical, since I suspected something had been adjusted in the back end without us being told. Perhaps some confidential security measures had been applied to prevent virus or malware attacks and were affecting our program. But that is my old paranoia, my suspicion that there is some secret conspiracy happening behind my back.
In the meantime, we were still getting phone calls from the warehouse as they tried to get on with their work. A problem came up at the end of the day and I had to get assistance from another Indian colleague whose cubicle is beside mine. I walked over, explained the problem and he worked on fixing it. Meanwhile, no word from the folks in Nevada; they seemed to have no trouble working with their program. It is a strange world indeed when two parties are using the same program but only one of them is having problems. One assumes it is the size of the transactions. But that is the nature of the 'cloud', where problems strike in the ether and one does not know where the glitch is. The merry-go-round will continue tomorrow if the problem is not fixed. But the head honcho said the bandwidth would be increased tonight. We shall see. I had planned to go to the gym tonight but just felt tired. The merry-go-round is getting to me too. Later, the folks in Ohio replied to my email and found no problem at their end. The problem ticket remains open in the ether with no ownership.