OEE1052: The outbound enabler was unable to write an OE_DNCHANNEL_NOOP (PUNCTURE) packet to the Relay Server


We are in the process of upgrading MobiLink from v11 to v16.

I have run into this issue now in 3 environments:

- initial v16 installation (resolved)
- migration to test environment (resolved)

- migration to production environment (not resolved)

I cannot get the Relay Server to communicate with the Outbound Enabler.  The sequence of start commands is as follows, with a rough sketch of the commands themselves after the list:

- Server1

    - Start RS

    - log entries look fine as it waits for RSOE

- Server2

    - Start ML

    - log entry looks fine as it waits for RSOE

- Server2

    - Start RSOE

    - connects successfully to backend server (ML)

    - connects successfully to relay server (RS)
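For reference, a rough sketch of the commands behind those three steps; the farm, server and token names, the DSN, the ports and the log paths below are placeholders rather than our real values:

    rem Server1: start the Relay Server state manager (rs_server.dll itself is hosted by IIS)
    rshost -f rs.config -o c:\logs\rs.log

    rem Server2: start the MobiLink server with an HTTP listener for the RSOE
    mlsrv16 -c "dsn=my_cons;uid=ml_user;pwd=placeholder" -x http(port=2439) -o c:\logs\ml.mls

    rem Server2: start the Outbound Enabler, pointing at ML (backend) and at the Relay Server
    rsoe -f MyBackendFarm -id MyBackendServer -t MyToken ^
         -cs "host=localhost;port=2439" ^
         -cr "host=relayserver.example.com;port=80;url_suffix=/rs/server/rs_server.dll" ^
         -o c:\logs\rsoe.olg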

Then, after 2-3 minutes, the following errors are encountered:

- from RSOE log

    - OEE1052: The outbound enabler was unable to write a OE_DNCHANNEL_NOOP(PUNCTURE) packet to the Relay Server because of [MCL9: Unable to write 4133 bytes. Network Error: An established connection was aborted by the software in your machine. (winsock error code: 10053).]

    - OEE1031: The Outbound Enabler access was denied by the Relay Server

    - OEE1036: A network connection was closed by the Relay Server or an intermediary while the Outbound Enabler was reading from it

    - these messages continue repeatedly

- from RS log

    - RSE3003: Redundant outbound enabler connection for backend server 'servername' in backend farm 'farmname' was ignored

    - RSE3005: Mismatched outbound enabler instance for backend server 'servername' in backend farm 'farmname'

    - RSE3009: Communication error [SYS1229: An operation was attempted on a nonexistent network connection...] error while writing to up channel of backend server 'servername' in backend farm 'farmname'

    - these messages continue repeatedly

Does anyone have experience with this situation who could help me resolve it?  Thanks in advance.

regdomaratzki

My instinct from your description is that either the IIS Server has disconnected the connections made by the OE to the Relay Server running on IIS, or there is an HTTP intermediary between the RS + OE that is severing the connections.  We'll need some more detailed data to get to the root of the problem.

1) What is the network path from the Outbound Enabler to the IIS Server on which the Relay Server is running?  If there are any firewalls or proxies between the RS + OE, I would look there first to see whether an HTTP intermediary is closing the connection.

2) Can you please increase the verbosity of the Relay Server log and the Outbound Enabler log to 5 and then redo the test you describe? I like that your test starts with nothing running, so please do that again this time.  Once you start getting the errors, shut everything down again and then please post the following logs to the forum (a sketch of where the verbosity settings go follows the list):

- The HTTPERR log that covers the duration of the test from the IIS Machine (typically located at c:\windows\system32\LogFiles\HTTPERR)

- The IIS Access Log that covers the duration of the test (typically located at c:\inetpub\logs\logfiles\w3svc1)

- The applicationHost.config file for IIS (typically located at c:\windows\system32\inetsrv\config).

- The output from the "appcmd list config" command, run on the machine where IIS is running.  appcmd isn't always in the path, but is typically located at c:\windows\system32\inetsrv.

- The verbosity 5 Relay Server Log that covers the duration of the test

- The verbosity 5 Outbound Enabler Log that covers the duration of the test.
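In case it is useful, a sketch of where those verbosity settings typically go.  In the Relay Server configuration file (rs.config is an example name), the [options] section carries the verbosity:

    [options]
    verbosity = 5

For the Outbound Enabler, adding -v 5 to its existing command line (or the matching line in its configuration file) does the same, for example:

    rsoe -v 5 -o c:\logs\rsoe.olg ...

Restart the Relay Server and the Outbound Enabler after the change so the new verbosity takes effect.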

Thanks,

Reg Domaratzki


Thank you so much, Reg.  I will get the files you have requested and post back to the forum right away.  Your response is much appreciated.


Your first suggestion was to look for firewall issues.  There do not appear to be any issues with the IP and port entries.  However, I do not understand what is meant by "HTTP intermediary".  Is this something you can expand upon while I get the other trace files attached to this thread?  Thanks

regdomaratzki

By "HTTP Intermediary" I mean any process that intercepts HTTP traffic (either at the HTTP layer or maybe lower down the stack at the TCP/IP layer) and forwards it on.

Firewalls, [reverse] proxies and load balancers are all examples of what I would call an HTTP intermediary.

Reg


Hi Reg,

As requested, I am attaching the following files:

- HTTPERR log - n/a - has not been opened since June of 2012

- u_ex160301.log

- ApplicationHostConfig.txt

- AppcmdListConfig.txt

- Rsoe.16030102.olg

- Ml.16030102.mls

- rs.16030102.nrl

- rsoe.Config.txt

Thanks

Attachments (posted in separate replies):

- IIS Log

- ApplicationHost.config

- AppcmdListConfig.txt

- RSOE log

- ML log

- RS log

- RSOE config

regdomaratzki

It looks to me like IIS is silently terminating the connection and causing the problem.

If you look at the second request made to IIS in the access log you provided, you can see that the IIS Server returned an HTTP 404.13 error on a POST to /rs/server/rs_server.dll.  A 404.13 error means "Content Length Too Large", which is often an indication that the maxAllowedContentLength clause hasn't been added to the IIS configuration, as detailed in point #5 in the documentation on how to deploy the Relay Server on IIS.

From looking at the applicationHost.config file you supplied, it does not look like this clause has been added.
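For reference, the clause in question is the requestFiltering limit in the IIS configuration for the rs application.  It looks something like this; the value is just an example of a suitably large limit, so use whatever the Relay Server deployment documentation specifies for your version:

    <system.webServer>
      <security>
        <requestFiltering>
          <requestLimits maxAllowedContentLength="2147483647" />
        </requestFiltering>
      </security>
    </system.webServer>

It can go in the web.config of the rs virtual directory or in the matching <location> section of applicationHost.config; restarting the application pool (or running iisreset) afterwards ensures IIS picks it up.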

Reg


Thanks Reg!

I think you may have found the issue.  I have applied that change and restarted my "pseudo" environment.  The error messages are no longer being produced, and it appears that the RS is talking to the RSOE.

I cannot take this any further until tomorrow evening when I can connect the outside world to IIS and the RS.

Thanks for your quick response.  I will update this thread after our conversion tomorrow evening.


Reg...

This was our problem.  Your help fixed the issue.

We have now migrated the production environment to v16 MobiLink.  Thank you so much for your professional and timely responses on this thread.  Very much appreciated.

Vance McColl