No, the second scenario is not the same as the first.
Remember these (assuming 'Delay Thread creation until needed' is selected):

Time between thread starts = (Ramp-up Period) / (Number of Threads) — equivalently, Thread Creation Rate = (Number of Threads) / (Ramp-up Period).

Each thread executes independently.
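As a quick sanity check on that arithmetic, here is a minimal sketch (the numbers are the hypothetical settings from the scenarios below, not values read from any real test plan). Note that the quotient Ramp-up / Threads gives the spacing between thread starts, and the creation rate is its reciprocal:

```python
# Hypothetical JMeter-style thread group settings (illustrative only).
ramp_up_period = 100   # seconds
num_threads = 100      # virtual users

# Spacing between consecutive thread starts:
delay_between_threads = ramp_up_period / num_threads  # seconds per thread

# Creation rate is the reciprocal:
creation_rate = num_threads / ramp_up_period          # threads per second

print(delay_between_threads)  # 1.0 -> one thread started every second
print(creation_rate)          # 1.0 -> one thread per second
```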
First Scenario:
Thread Creation Rate = 1 thread / sec. Every second, a thread is created. So after 100 sec, you will have 100 threads/users.
Once the first thread is created, it sends its first request. Once that request completes, it does not wait unless you have explicitly set a timer. Since the loop count is 2, it then sends another request. So each user sends 2 requests to the server, but the second request is sent only after the first one completes. Whether the other threads have sent their requests or received their responses does not matter; each thread proceeds on its own.
Second Scenario:
Thread Creation Rate = 1 thread / sec. So after 200 sec, you will have 200 threads/users.
Here, each user sends only one request to the server.
What is the difference?
Let's assume the server takes 300 seconds to process a request.
First Scenario:
After 100 seconds, 100 users have sent requests to the server. Since each request takes 300 seconds to process, at the 100-second mark all 100 users are waiting for the server to respond. A user does not send its second request until its first one gets a response. So even after 200 seconds, the server still has only 100 concurrent users.
Second Scenario:
But here, the server has 200 concurrent users after 200 seconds, so there is more load on the server compared to the first scenario. The server's response time may therefore be higher than in the first scenario.
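The comparison above can be checked with a small back-of-the-envelope simulation (my own sketch, not anything JMeter runs). It counts how many first requests are in flight at a given time, assuming each request takes 300 seconds and threads start evenly spaced over the ramp-up. The loop count of 2 in the first scenario doesn't change the count at t = 200 s, because no thread's first request has finished by then:

```python
def concurrent_users(t, num_threads, ramp_up, service_time=300):
    """Count requests in flight at time t (seconds).

    Assumes thread i starts at i * (ramp_up / num_threads) seconds
    and its first request occupies the server for service_time seconds.
    """
    delay = ramp_up / num_threads
    in_flight = 0
    for i in range(num_threads):
        start = i * delay
        # The request is in flight from `start` until `start + service_time`.
        if start <= t < start + service_time:
            in_flight += 1
    return in_flight

# Scenario 1: 100 threads over a 100 s ramp-up.
print(concurrent_users(200, num_threads=100, ramp_up=100))  # 100

# Scenario 2: 200 threads over a 200 s ramp-up.
print(concurrent_users(200, num_threads=200, ramp_up=200))  # 200
```

So at the 200-second mark the second scenario holds twice as many concurrent users, which is exactly the difference described above.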