6
votes

Possible Duplicate:
( POD )freeing memory : is delete[] equal to delete ?

Does delete deallocate the elements beyond the first in an array?

char *s = new char[n];
delete s;

Does it matter in the above case seeing as all the elements of s are allocated contiguously, and it shouldn't be possible to delete only a portion of the array?

For more complex types, would delete call the destructor of objects beyond the first one?

Object *p = new Object[n];
delete p;

How can delete[] deduce the number of Objects beyond the first, wouldn't this mean it must know the size of the allocated memory region? What if the memory region was allocated with some overhang for performance reasons? For example one could assume that not all allocators would provide a granularity of a single byte. Then any particular allocation could exceed the required size for each element by a whole element or more.

For primitive types, such as char, int, is there any difference between:

int *p = new int[n];
delete p;
delete[] p;
free p;

Except for the routes taken by the respective calls through the delete->free deallocation machinery?

10
I don't believe this is a duplicate; I'm asking some very specific, different questions, and have no interest in assembly connotations. – Matt Joiner
No, it is a duplicate. You are asking the exact same question: "am I allowed to substitute delete for delete[]?" And the answer is the same as in all the previous threads where the same was asked: "No, you are not. It is undefined behavior." – jalf
If you want to ask other questions (like "how does delete[] know how many objects to delete?"), then create a new question for that and give it its own title, so that others who want to ask the same will be able to find it. – jalf

10 Answers

15
votes

It's undefined behaviour (most likely it will corrupt the heap or crash the program immediately) and you should never do it. Only free memory with the primitive corresponding to the one used to allocate that memory.

Violating this rule may happen to work by coincidence, but the program can break as soon as anything changes: the compiler, the runtime, the compiler settings. You should never rely on or expect such lucky behaviour.

delete[] uses compiler-specific service data to determine the number of elements. Usually a slightly bigger block is allocated when new[] is called, the element count is stored at the beginning, and the caller is given the address just past the stored count. In any case, delete[] relies on the block having been allocated by new[], not by anything else. If you pair anything other than new[] with delete[], or vice versa, you run into undefined behaviour.
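
To make the idea concrete, here is a rough, hand-written sketch of such a "cookie" scheme. It is not the ABI of any real compiler, and the names sketch_new_array/sketch_delete_array are made up for illustration; real implementations emit equivalent logic inside the code generated for new[] and delete[]:

#include <cstddef>
#include <cstdlib>
#include <new>

template <typename T>
T* sketch_new_array(std::size_t n) {
    // Allocate room for the element count plus the elements themselves.
    // (Real implementations also account for alignment; omitted for brevity.)
    void* raw = std::malloc(sizeof(std::size_t) + n * sizeof(T));
    if (!raw) throw std::bad_alloc();
    *static_cast<std::size_t*>(raw) = n;                 // store the count ("cookie")
    T* first = reinterpret_cast<T*>(static_cast<std::size_t*>(raw) + 1);
    for (std::size_t i = 0; i < n; ++i)
        new (first + i) T();                             // construct each element
    return first;                                        // caller never sees the cookie
}

template <typename T>
void sketch_delete_array(T* first) {
    if (!first) return;
    std::size_t* cookie = reinterpret_cast<std::size_t*>(first) - 1;
    for (std::size_t i = *cookie; i > 0; --i)
        first[i - 1].~T();                               // destroy in reverse order
    std::free(cookie);                                   // free the original block, not 'first'
}

Note that handing first to a scalar-style deallocation (the analogue of plain delete) would pass the wrong address to the allocator, which is exactly why mixing the two forms is undefined.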

10
votes

Read the FAQ: 16.3 Can I free() pointers allocated with new? Can I delete pointers allocated with malloc()?

Does it matter in the above case seeing as all the elements of s are allocated contiguously, and it shouldn't be possible to delete only a portion of the array?

Yes it does.

How can delete[] deduce the number of Objects beyond the first, wouldn't this mean it must know the size of the allocated memory region?

The compiler needs to know. See FAQ 16.11

Because the compiler stores that information.

What I mean is that the compiler needs the two distinct forms of delete in order to generate the appropriate book-keeping code. I hope this is clear now.

4
votes

Yes, this is dangerous!

Don't do it!

It will lead to program crashes or even worse behavior!

For objects allocated with new you MUST use delete;

For objects allocated with new [] you MUST use delete [];

For objects allocated with malloc() or calloc() you MUST use free();

Be aware also that in all these cases it is illegal to delete/free an already deleted/freed pointer a second time. Calling delete, delete[], or free on a null pointer, however, is legal and does nothing.
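
A minimal example of the legal pairings (Object here is just a placeholder type):

#include <cstdlib>

struct Object { int value; };

int main() {
    Object* a = new Object;        // scalar new ...
    delete a;                      // ... pairs with scalar delete

    Object* b = new Object[10];    // array new ...
    delete[] b;                    // ... pairs with array delete

    int* c = static_cast<int*>(std::malloc(10 * sizeof(int)));
    std::free(c);                  // malloc pairs with free

    Object* d = nullptr;
    delete d;                      // deleting a null pointer is a no-op
    delete[] d;                    // same for delete[]
    std::free(nullptr);            // and free(NULL) is a no-op as well
    return 0;
}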

3
votes

Yes, there's a real practical danger. Even implementation details aside, remember that the operator new/operator delete and operator new[]/operator delete[] functions can be replaced completely independently. For this reason, it is wise to think of new/delete, new[]/delete[], malloc/free etc. as different, completely independent methods of memory allocation, which have absolutely nothing in common.
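
For example (a sketch, not something you would normally ship), a program is allowed to replace only the global array forms, which makes it obvious that the scalar and array machinery need not have anything in common:

#include <cstdio>
#include <cstdlib>
#include <new>

// Replace only the global array forms; the scalar forms stay untouched.
void* operator new[](std::size_t size) {
    void* p = std::malloc(size);
    if (!p) throw std::bad_alloc();
    std::printf("operator new[](%zu) -> %p\n", size, p);
    return p;
}

void operator delete[](void* p) noexcept {
    std::printf("operator delete[](%p)\n", p);
    std::free(p);
}

int main() {
    int* a = new int[4];   // goes through the replaced array functions
    delete[] a;            // must go through the replaced array delete
    int* b = new int;      // still uses the default scalar operator new
    delete b;
    return 0;
}

Had the code said delete a; instead, the pointer would have been handed to the default scalar operator delete, which is under no obligation to cooperate with the malloc call made by the replaced operator new[].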

2
votes

Raymond Chen (Microsoft developer) has an in-depth article covering scalar vs. vector deletes, and gives some background on the differences. See:

http://blogs.msdn.com/oldnewthing/archive/2004/02/03/66660.aspx

2
votes

Does delete deallocate the elements beyond the first in an array?

No. delete will deallocate only the first element regardless of which compiler you use. It may work in some cases, but that's coincidental.

Does it matter in the above case seeing as all the elements of s are allocated contiguously, and it shouldn't be possible to delete only a portion of the array?

It depends on how the memory is marked as free. Again, this is implementation-dependent.

For more complex types, would delete call the destructor of objects beyond the first one?

No. Try this:

#include <cstdio>

class DelTest {
    static int next;
    int i;
public:
    DelTest() : i(next++) { printf("Allocated %d\n", i); }
    ~DelTest(){ printf("Deleted %d\n", i); }
};

int DelTest::next = 0;

int main(){
    DelTest *p = new DelTest[5];
    delete p;    // deliberately the wrong form: scalar delete on an array
    return 0;
}
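
On a typical implementation you will see at most "Deleted 0" printed (only the first destructor, if any, runs), often followed by a heap error or crash, because the pointer handed back to the allocator is not the one new[] actually obtained. Either way the behaviour is undefined.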

How can delete[] deduce the number of Objects beyond the first, wouldn't this mean it must know the size of the allocated memory region?

Yes, the size is stored somewhere; where it is stored depends on the implementation. For example, the allocator could store the size in a header preceding the returned address.

What if the memory region was allocated with some overhang for performance reasons? For example one could assume that not all allocators would provide a granularity of a single byte. Then any particular allocation could exceed the required size for each element by a whole element or more.

It is for this reason that the returned address is made to align to word boundaries. The "overhang" can be seen using the sizeof operator and applies to objects on the stack as well.
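
A quick way to see that padding/overhang, independent of any allocator (the struct below is only for illustration):

#include <cstdio>

struct Padded {
    char c;   // 1 byte of data ...
    int  i;   // ... but int's alignment typically forces 3 bytes of padding before it
};

int main() {
    // On a typical 32/64-bit platform this prints 8 rather than 5:
    // the padding is part of sizeof, so arrays keep every element aligned.
    std::printf("sizeof(Padded)    = %zu\n", sizeof(Padded));
    std::printf("sizeof(Padded[4]) = %zu\n", sizeof(Padded[4]));
    return 0;
}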

For primitive types, such as char, int, is there any difference between ...?

Yes. malloc and new could be using separate blocks of memory. Even if this were not the case, it's a good practice not to assume they are the same.

1
votes

It's undefined behavior. Hence, the answer is: yes, there could be danger. And it's impossible to predict exactly what will trigger problems. Even if it works one time, will it work again? Does it depend on the type? On the element count?

1
votes

For primitive types, such as char, int, is there any difference between:

I'd say you'll get undefined behaviour, so you shouldn't count on stable behaviour. You should always use the new/delete, new[]/delete[] and malloc/free pairs.

0
votes

Although it might seem logical that you could mix new[] with free, or use delete instead of delete[], this rests on the assumption that the compiler is fairly simplistic, i.e. that it will always use malloc() to implement the memory allocation for new[].

The problem is that if your compiler has a smart enough optimizer, it might see that there is no delete[] corresponding to the new[] for the object you created. It might therefore assume that it can fetch the memory for it from anywhere, including the stack, in order to save the cost of calling the real malloc() for the new[]. Then, when you try to call free() or the wrong kind of delete on it, it is likely to malfunction badly.

0
votes

Step 1: read this: what-is-the-difference-between-new-delete-and-malloc-free

You are only looking at what you see on the developer side.
What you are not considering is how the standard library does its memory management.

The first difference is that new and malloc allocate memory from two different areas of memory (new from the free store and malloc from the heap; don't focus on the names, they are both basically heaps, those are just their official names from the standard). If you allocate from one and deallocate to the other you will mess up the data structures used to manage the memory (there is no guarantee they will use the same structure for memory management).

When you allocate a block like this:

int* x = new int; // 0x32

Memory may look like this (it probably won't, since I made this up without thinking too hard):

Memory   Value      Comment
0x08     0x40       // Chunk size
0x16     0x10000008 // Free list for chunk size 40
0x24     0x08       // Block size
0x32     ??         // Address returned by new
0x40     0x08       // Pointer back to head block
0x48     0x32       // Link to next item in a chain of something

The point is that there is a lot more information in the allocated block than just the int you allocated; the extra data is there to handle memory management.

The standard does not specify how this is done because (in C/C++ style) they did not want to impinge on the compiler/library manufacturers' ability to implement the most efficient memory management method for their architecture.

Taking this into account, you want to give the manufacturer the ability to distinguish array allocation/deallocation from normal allocation/deallocation, so that each can be made as efficient as possible independently. As a result you cannot mix and match, as internally they may use different data structures.

If you actually analyse the memory allocation differences between C and C++ applications, you find that they are very different. It is thus not unreasonable to use completely different memory management techniques to optimise for the application type. This is another reason to prefer new over malloc() in C++, as it will probably be more efficient (the more important reason, though, will always be reducing complexity (IMO)).