Gone, but not forgotten

For over twelve years I have worked for an IBM Business Partner in the UK, focusing on IBM Collaboration Solutions, and I have loved every minute of it. But it's time to move on to a new challenge, one outside the ICS community.

Coming from Domino third-level support I crossed over easily to Sametime 6.5.1, which at that time was an add-on to the underlying Domino server (still true to an extent now). Sametime was my first love. It should have been easy, right? An additional installer on top of Domino, and for many deployments it was, and still is, although not so much now with WebSphere and DB2 in the mix.

What I loved were the problems Sametime caused, or should I say, the problems caused when you introduced Sametime to a large user base. I wrestled for many days tuning Sametime for a deployment of over 40,000 users, tracing LDAP, debugging the text files and tweaking sametime.ini. It was a baptism of fire and I loved Sametime more for the pain it caused me. I learnt so much, and much of it I still remember and often draw on when deploying Sametime for customers.

In 2007 I went to Collaboration University in London as Quickr had recently been released. It was my first introduction to the ICS community. Being in the same place as dozens of others, all with the same goal of making Sametime, Quickr and Domino successful, was intoxicating. I already had quite a bit of experience with Sametime, but it helped to be in the same place as Chris Miller, Carl Tyler, Rob Novak and Warren Elsmore to bolster that knowledge and start learning about Quickr. Quickr took off incredibly quickly, being easy to implement and manage, which is why it's still being used now, long after it went end of life.

In recent years Connections has been the application that seems to be more in demand so I have seen my time split between the two applications. I remember being introduced to Connections, also in 2007, at a course in Hursley which described deploying and configuring Connections 2.0. At that point there were only six applications and Bookmarks was called Dogear!

Connections is a wonderfully complex set of applications which has come a long way from the days when it was a collection of disparate applications bundled together with WAS acting as the glue. The premise: to get people working together better and to let you find information quickly so you can focus on your job. For many people like me that resonates. I get paid to work with software that allows people to work together better, to form relationships with one another and, most importantly, to share. You might argue that the same is true of all software, but it's not. Connections is unique in that respect.

I don’t know whether it was Connections that started my journey or whether it was already something inside of me but sharing is one of the most important aspects of my job. Connections is all about sharing. Information is put into Connections for others to consume. They have a subject of interest and Connections allows them to find a person with knowledge of that subject, to follow them, to communicate with them, to add their take on the subject.

This approach to sharing makes all your knowledge public. No longer do you find people keeping information in their mail files or on P drives; it's all there to be found. The days when you hoarded your information to make yourself seem indispensable to your employer are gone. People who actively share their information are now the ones seen as indispensable.

This sharing concept is underlined by two excellent Skype chat groups for Sametime and Connections. Within these two chat rooms are people such as Gabriella Davis, Robert Farstad, Michele Buccarello, Sharon James, Christoph Stoettner, Keith Brooks, Marco Ensing, Matteo Bisi, Michael Urspringer, Nico Meisenzahl, Roberto Boccadoro, Wannes Rams, Chris Whisonant and many others I haven't mentioned. They are busy people, but they help with problems whenever they have a spare 10 minutes. They share their wisdom and experience with whoever asks, regardless of the complexity of the question. The underlying sharing ideology runs through all these people, through the software and into the wider ICS community.

As I alluded to in the opening paragraph, I am set for a new challenge, and searching for the right one has taken me outside the ICS product portfolio, though I am staying within the larger IBM sphere. I am joining IBM Resilient, working on their security incident response platform, which IBM bought last year. It looks like an exciting time to be joining what is a growing industry.

I am sad to leave such a wonderful community at such an exciting stage, with Pink gaining traction. I strongly believe Pink and its underlying platform will be a success, especially with the aforementioned people driving the product forward.

Whilst I will soon be gone, the years working with this software will not be forgotten and neither will the friends and colleagues I have made along the way.


Cannot get past Context Roots page in Engagement Center

A few weeks ago I had some problems installing Engagement Center on my employer's internal Connections 5.5 servers. I installed it just as I had on a Connections 6.0 server, but each time I went to https://connections.acme.com/xcc/main I was redirected with a 302 to https://connections.acme.com/xcc/admin#ContextRoots?redirectUrl=/xcc/main, which is the context roots page.
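For anyone hitting the same thing, the redirect is easy to confirm from the command line; a quick sketch with curl, using the placeholder hostname from above:

```shell
# -s silent, -k skip certificate validation, -I headers only
curl -skI https://connections.acme.com/xcc/main | grep -iE '^(HTTP|Location)'
```

A 302 status with a Location header pointing at /xcc/admin#ContextRoots shows the symptom without needing a browser.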

I checked the context roots were correct and they were. I went back to the customization screen and ensured I had saved it.

It still wouldn't let me go to /xcc/main to start creating pages. I logged a PMR and Charlie Price got involved and reproduced it. It was an embarrassingly easy fix: I needed to go into the context roots screen and click save, even though the values were correct and didn't need changing. After clicking save I could go to /xcc/main and create my pages. Simples.

Cannot update RunAs role in Connections 6.0 with WAS FP10

A few weeks ago I was running a host of updates for a Connections 6.0 customer. One of the changes was to move connectionsAdmin from a local user to an LDAP user, but I got the following error when trying to add the new LDAP user: “User ID or password did not match.”

Anyone who has had to change the administrative user before knows that you do not want to back out after getting so near to the end of the process.

Since it was late in the day I posted this to the ever-helpful Skype group of Connections experts, and Tobias Grosse chipped in immediately, telling me he had hit the same thing with the same Domino LDAP. He told me that it is fixed by PI69518, which is included in WAS FP11.

After stopping all the JVMs and applying FP11 I was able to update the RunAs role. Phew.

Thanks Tobias.

Limiting resources used by IBM Cloud private and Orient Me

IBM Spectrum Conductor for Containers has been rebranded as IBM Cloud private with version 1.2.0 (https://www.ibm.com/developerworks/community/blogs/fe25b4ef-ea6a-4d86-a629-6f87ccf4649e/entry/IBM_Cloud_private_formerly_IBM_Spectrum_Conductor_for_Containers_version_1_2_0_is_now_available?lang=en).

IBM released version 6.0.0.1 of Orient Me and with it added new applications, increasing the total number of pods in play. Each pod requires some resources to run. Recently there has been some frustration among those working with Connections, trying to get Orient Me up and running on smaller servers for testing purposes or for deployment to SMB customers.

I spent some time looking at how to limit the resources consumed by decreasing the number of pods.

Kubernetes allows you to scale your pods up or down. This can be done on the command line or via the UI.

Since I prefer the command line, here is how you scale an application and its effect on the number of pods. There are two ways in which this is done: ReplicaSets and StatefulSets. I won't go into the differences between the two because I'm not even wholly sure myself, but suffice to say that most of the OM applications use ReplicaSets.

Replica Sets

I’m using analysisservice as an example because it is at the top when commands are run.

# kubectl get pods
NAME                               READY   STATUS    RESTARTS   AGE
analysisservice-1093785398-31ks2   1/1     Running   0          8m
analysisservice-1093785398-hf90j   1/1     Running   0          8m

# kubectl get rs
NAME                         DESIRED   CURRENT   READY   AGE
analysisservice-1093785398   2         2         2       9m

The following command tells K8s to change the number of pods that will accept load to one.

# kubectl scale --replicas=1 rs/analysisservice-1093785398
replicaset "analysisservice-1093785398" scaled

Below shows that just the one pod is ready to accept load. Note that the desired number is still two; this means two pods will return if all the pods are deleted or the OS is restarted.

# kubectl get rs
NAME                         DESIRED   CURRENT   READY   AGE
analysisservice-1093785398   2         2         1       9m

The pod that will no longer accept load is destroyed and a new one replaces it.

# kubectl get pods
NAME                               READY   STATUS              RESTARTS   AGE
analysisservice-1093785398-31ks2   1/1     Running             0          18m
analysisservice-1093785398-4njpn   1/1     Terminating         0          5m
analysisservice-1093785398-fmnrd   0/1     ContainerCreating   0          3s

You can see that the new pod is not “ready” and thus not accepting any load.

# kubectl get pods
NAME                               READY   STATUS    RESTARTS   AGE
analysisservice-1093785398-31ks2   1/1     Running   0          19m
analysisservice-1093785398-fmnrd   0/1     Running   0          43s

The reverse is also true: you can scale the number of pods upwards. ICp can do this automatically with policies based on CPU usage, creating more pods and then decreasing them when the load drops.
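I haven't tried the policy-based scaling myself, but plain Kubernetes exposes the same idea through the Horizontal Pod Autoscaler. A sketch, assuming the analysisservice deployment from above and that CPU metrics are available in the cluster:

```shell
# Keep between 1 and 2 replicas, adding pods above 80% CPU utilisation
kubectl autoscale deployment analysisservice --min=1 --max=2 --cpu-percent=80

# See what the autoscaler decided
kubectl get hpa
```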

The above approach does not persist over OS restarts or deletion of all the pods. To persist these changes the following steps need to be followed.

# kubectl get deployment
NAME              DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
analysisservice   2         2         2            2           34m

This command amends the deployment configuration, which was originally set by complete.6_0.yaml in the OM binaries.

# kubectl edit deployment analysisservice
apiVersion: extensions/v1beta1
kind: Deployment

This will open in vi, though you can change your editor if you prefer. Under the spec section you want to amend the number of replicas.

spec:
  replicas: 1
  selector:
    matchLabels:
      mService: analysisservice
      name: analysisservice
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1

Ignore the status section. Save and close (:wq)

# kubectl get deployment
NAME              DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
analysisservice   1         1         1            1           44m

This time there is no second pod listed with a 0/1 ready value; it has been deleted.

# kubectl get pods
NAME                               READY   STATUS    RESTARTS   AGE
analysisservice-1093785398-kz76m   1/1     Running   0          17m
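If you would rather not go through an editor at all, the same persistent change can be made non-interactively. A sketch using kubectl scale against the deployment (rather than the ReplicaSet), with the names from above:

```shell
# Scaling the deployment updates its spec, so unlike scaling the
# ReplicaSet directly, the change survives pod deletion and restarts
kubectl scale --replicas=1 deployment/analysisservice
kubectl get deployment analysisservice
```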

You can use the following command to open all the application deployments in vi and update them all at one time.

# kubectl edit deployment

When you save and close, the applications will be updated in line with the values you set for the replicas.

# kubectl get deployment
NAME                     DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
analysisservice          1         1         1            1           55m
haproxy                  1         1         1            1           57m
indexingservice          1         1         1            1           55m
itm-services             1         1         1            1           55m
mail-service             1         1         1            1           55m
orient-webclient         1         1         1            1           55m
people-migrate           1         1         1            1           55m
people-relation          1         1         1            1           55m
people-scoring           1         1         1            1           55m
redis-sentinel           1         1         1            1           57m
retrievalservice         1         1         1            1           55m
solr1                    1         1         1            1           57m
solr2                    1         1         1            1           57m
solr3                    1         1         1            1           57m
zookeeper-controller-1   1         1         1            1           57m
zookeeper-controller-2   1         1         1            1           57m
zookeeper-controller-3   1         1         1            1           57m

To delete the additional solr and zookeeper-controller deployments you need to run the following.

# kubectl delete deployment zookeeper-controller-2 zookeeper-controller-3
# kubectl delete deployment solr2 solr3

Running the following shows that the number of pods has decreased by quite a lot.

# kubectl get pods

Checking the ReplicaSets again shows the values have decreased.

# kubectl get rs

Mongo and redis-server do not use ReplicaSets; they use StatefulSets.

StatefulSets

The following command shows that there are 3 pods for each application.

# kubectl get statefulsets
NAME           DESIRED   CURRENT   AGE
mongo          3         3         1h
redis-server   3         3         1h

In the same vein as before, you edit the replicas, decreasing or increasing them as you see fit.

# kubectl edit statefulsets
statefulset "mongo" edited
statefulset "redis-server" edited

The end result is that each StatefulSet is configured with only one replica.

# kubectl get statefulsets
NAME           DESIRED   CURRENT   AGE
mongo          1         1         1h
redis-server   1         1         1h

The effect is seen when you list the pods.

# kubectl get pods
NAME             READY   STATUS    RESTARTS   AGE
mongo-0          2/2     Running   0          1h
redis-server-0   1/1     Running   0          1h

At install time

These changes can be made at install time by updating the various .yml files in /microservices/hybridcloud/templates/* and /microservices/hybridcloud/templates/complete.6_0.yaml and then running install.sh.
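One way to script that is to rewrite the replicas values in the template files with sed before running install.sh. The sketch below works on a throwaway stand-in file; the path and contents are illustrative, not the real template:

```shell
# Create a stand-in for one of the Orient Me template files
cat > /tmp/complete.6_0.yaml <<'EOF'
apiVersion: extensions/v1beta1
kind: Deployment
spec:
  replicas: 2
EOF

# Rewrite any "replicas: N" line to "replicas: 1" (GNU sed, in place)
sed -i 's/^\([[:space:]]*replicas:\)[[:space:]]*[0-9][0-9]*/\1 1/' /tmp/complete.6_0.yaml

grep 'replicas:' /tmp/complete.6_0.yaml
```

The same one-liner can then be pointed at each of the real template files before installation.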

Finally

I have only experimented on the default applications and have not touched those in the kube-system namespace, which are ICp applications and not OM specific.

I haven't tried this on a working system yet, purely on a detached single node running all roles with a hostpath configuration.

Since there is no load on the server, my measurements of the resources consumed before and after the changes are far from scientific, but looking at the UI the amount of CPU and memory used is certainly less than before.

I have no idea as yet whether this will break OM but I will persist and see whether it does or whether it works swimmingly. If anyone tries this out then please feedback to me.

BTW, I restarted the OS and had a couple of problems with the analysisservice and indexingservice pods not being ready and showing as unhealthy, but after deleting haproxy, redis-server-0 and redis-sentinel all my pods are showing as healthy.

IBM, please, please provide a relatively simple way (ideally at install time) for us to cut the deployment down to the bare bones, perhaps with small, medium or large deployment options as you do with traditional Connections.

Update 05/07/2017

Once I integrated the server with a working Connections 6.0 server with the latest fixes applied, the ITM bar did not work. Nico Meisenzahl has also been looking into this and we hope to have a working setup soon.

Update 07/07/2017

Nico created a great blog post on updating the yml files to decrease the number of pods/containers during installation of Orient Me.

IBM Connections Files plugin not working within Notes when TLSv1.2 is enforced

After enforcing TLSv1.2 on our internal Connections 5.5 servers the Files plugin would not work.
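For context, enforcing TLSv1.2 on IHS is typically done with the SSLProtocolDisable directive inside the SSL virtual host in httpd.conf; a sketch (the directive names are IHS's own, the surrounding configuration is illustrative):

```apache
# Inside the SSL-enabled virtual host
SSLEnable
# Refuse anything older than TLSv1.2
SSLProtocolDisable SSLv2 SSLv3 TLSv10 TLSv11
```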

In the IHS logs I would see errors such as

[warn] [client 80.229.222.90] [7f9a700a7060] [21173] SSL0222W: SSL Handshake Failed, No ciphers specified (no shared ciphers or no shared protocols). [xx.xx.xx.xx:62899 -> xxx.xxx.xxx.xxx:443] [09:45:11.000102454] 0ms

Enabling trace on IHS showed that the protocol being used was TLSv1.0, which matched the Wireshark output. Oddly, the Status Updates and Activities plugins use TLSv1.2.

"GET /files/basic/api/library/4a7a7240-8f68-44d8-9447-7410cc2bb467/feed?pageSize=300&acls=true&sI=601 HTTP/1.1" 200 168770 TLS_RSA_WITH_AES_128_CBC_SHA TLSV1

I then had to allow TLSv1.0 until I could get an explanation from IBM.

Finally IBM came back with the following two lines to be added to the notes.ini.

SSL_DISABLE_TLS_10
DISABLE_SSLV3=1

Now in access_log I see TLSv1.2 being used.

"GET /files/basic/api/library/4a7a7240-8f68-44d8-9447-7410cc2bb467/feed?pageSize=300&acls=true&sI=601 HTTP/1.1" 200 168770 TLS_RSA_WITH_AES_128_GCM_SHA256 TLSV1.2

IBM also suggested that I check the following was set in plugin_customization.ini, which it was.

com.ibm.documents.connector.service/ENABLE_SSL=true

The notes.ini values have been pushed out to my colleagues via Domino policies.

Touchpoint problem due to no search index

A new Connections customer got in touch with a raft of problems after an upgrade to Connections 6. One of them was a problem with Touchpoint which stopped users from completing the onboarding process, causing them to be repeatedly redirected back to Touchpoint. They were able to get two or three screens into “Add your interests” and then could go no further, having to use “finish later” or being faced with “Error during prefetching for step profileTags.”

A quick Google of “profileTags” turned up references to search within Connections. I checked the search index (which I hadn't got around to doing just yet) and I didn't find INDEX.READY. The search index had not been created due to LTPAToken exceptions, which required the scheduled tasks to be cleared and all the clearScheduler.sql scripts to be run. Once the search index was created, Touchpoint worked.
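For reference, INDEX.READY is a marker file written into the search index directory once the index has been built. A quick way to check for it; the path below is only an example and an assumption, as the real location is set by the SEARCH_INDEX_DIR WebSphere variable:

```shell
# If this returns nothing, the search index has not been built
find /opt/IBM/Connections/data/local/search/index -maxdepth 1 -name INDEX.READY
```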

Sametime file transfer not working due to chat logging settings

Internally I transitioned our users over to a new Sametime 9.0.1 Community server with audio and video, meetings, a TURN server, the works, and it all worked, apart from file transfer.

When opening a chat window, the file icon would show but after about 1-2 seconds it would disappear. If you were quick enough you could send the recipient a file.

I checked all the policies, checked policies.user.xml, updated managed-settings.xml and enabled trace on the client, and tried various configurations, all of which showed that file transfer (both direct and via the server) was enabled.

The L3 IBM'er came back with the following:

Next I looked at sametime.log and I see ST filetransfer is not staying started and suspect this is why the client can't file transfer and they see the file transfer button show up and then go away:
I stplaces 28/Feb/17, 19:27:41 Places is operating in mode RELAX (1)
I stfiletransfer 28/Feb/17, 19:27:41 ChatLoggingMgr::setMode: mode <1>
E stfiletransfer 28/Feb/17, 19:27:41 Failed to load chatLogging BB or find one of its functions
E stfiletransfer 28/Feb/17, 19:27:41 Logging initialization failed for ChatLog library []
I stfiletransfer 28/Feb/17, 19:27:41 Terminated
I stlogger 28/Feb/17, 19:27:41 Initialization completed
I stchatlogging 28/Feb/17, 19:27:41 ChatLoggingMgr::setMode: mode <1>
E stchatlogging 28/Feb/17, 19:27:41 Failed to load chatLogging BB or find one of its functions
E stchatlogging 28/Feb/17, 19:27:41 Logging initialization failed for ChatLog library []
I stchatlogging 28/Feb/17, 19:27:41 Terminated

Chatlogging being enabled when it actually does not exist causes these types of errors, and I see in the stconfig.nsf -> Communityservices document that, yes, chat logging is enabled and in relax mode:

But in sametime.ini there are no chatlogging enablement statements:
[ST_BB_NAMES]
ST_CHAT_LOG=N/A
ST_AUTH_TOKEN=notes
[stofflinemessages]

Potential solution
If there is no chatlogging software then they need to set the setting Flag: off and Type: 0

I updated the values in the SSC which updated the document in stconfig.nsf and after a restart of the Community server file transfer is now available!

The default is to set this value to “when available” but setting it to “never” worked for me.

Update – 27/04/17

IBM told me that with 9.0.1, out of the box, this option is disabled by default. As this was a 9.0.0.1 server upgraded to 9.0.1 it may have been the case that 9.0.0.1 had this enabled. Or, I may have set it accidentally….

Update – 28/04/17

IBM posted the Technote yesterday – https://www-01.ibm.com/support/docview.wss?uid=swg22002683&myns=swglotus&mynp=OCSSKTXQ&mync=E&cm_sp=swglotus-_-OCSSKTXQ-_-E