Dumb question, I know, but I've got a developer insisting it works other than how I know it does.
Server environment is Windows/C#/.NET
There are methods in the "code-behind" -- I've only ever heard that term used in the ASP.NET world -- that are never sent to the browser as JavaScript. So when you click a button, the browser does not have the code that will be executed on the server. Which is exactly as I would expect.
What this guy is saying is that a screen scraper cannot manipulate this web app, because the scraper does not know what code is being executed on the server.
My understanding of HTML/HTTP is that the browser has to talk to the server via HTTP requests, and that the target URL and every parameter being passed can be determined by examining the HTML/JavaScript that was delivered to the browser (or simply by watching the requests go over the wire).
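To make that concrete, here is a minimal sketch of what a scraper actually sees. The page markup, field names, and values below are all hypothetical stand-ins for a typical ASP.NET Web Forms page; the point is that the hidden fields (`__VIEWSTATE`, `__EVENTVALIDATION`) and the form target are sitting right there in the HTML, so the POST that a button click triggers can be reconstructed without ever seeing a line of the code-behind:

```python
# Sketch: recover an ASP.NET Web Forms POST from the HTML alone.
# The sample page below is hypothetical but follows the shape Web Forms emits.
from html.parser import HTMLParser
from urllib.parse import urlencode

SAMPLE_PAGE = """
<form method="post" action="Default.aspx">
  <input type="hidden" name="__VIEWSTATE" value="dDwxMjM0NTY3ODk7Oz4=" />
  <input type="hidden" name="__EVENTVALIDATION" value="AbCdEf123==" />
  <input type="text" name="txtSearch" />
  <input type="submit" name="btnGo" value="Go" />
</form>
"""

class FormFieldCollector(HTMLParser):
    """Collect every <input> name/value pair from the markup."""
    def __init__(self):
        super().__init__()
        self.fields = {}

    def handle_starttag(self, tag, attrs):
        if tag == "input":
            a = dict(attrs)
            if "name" in a:
                self.fields[a["name"]] = a.get("value", "")

collector = FormFieldCollector()
collector.feed(SAMPLE_PAGE)

# Fill in the one user-visible field, then build the exact body the
# browser would POST back to Default.aspx when the button is clicked.
collector.fields["txtSearch"] = "widgets"
post_body = urlencode(collector.fields)
print(post_body)
```

A scraper would then send `post_body` to the form's `action` URL with an ordinary HTTP POST; the server cannot tell that request apart from one produced by a real button click. The code-behind stays hidden, but hiding it never mattered -- the wire protocol is all the scraper needs.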
I suspect this is partially a terminology mismatch and he's not understanding what I'm saying. But he insists that you can write a site in .NET and hide all the functionality in the code-behind, so that the browser interaction cannot be automated because the browser doesn't know what's being done on the back end.
Am I correct to think this guy comes from a Windows desktop background and doesn't understand the fundamentals of HTML/HTTP? Or has Microsoft somehow managed to create a system that works with ordinary browsers without telling the browser what requests it needs to send?