<rdar://problem/13069948>

Major fixes to allow reading files that are over 4GB. The main problem was that DataExtractor was using a 32-bit offset as its data cursor, and since we mmap all of our object files, a very large core file that was over 4GB would run into the 4GB boundary.
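
To illustrate the failure mode (a hypothetical sketch, not code from this patch): any offset past 4GB wraps when squeezed into a 32-bit cursor, so reads silently land at the wrong place in the mmap'd file.

// Hypothetical illustration of the 32-bit cursor overflow.
#include <cstdint>
#include <cstdio>

int main() {
    const uint64_t file_offset = 5ULL * 1024 * 1024 * 1024; // 5GB into an mmap'd file
    const uint32_t old_cursor = (uint32_t)file_offset;      // old 32-bit data cursor
    // 5GB truncated to 32 bits wraps around to 1GB, so a read through the
    // old cursor would silently target the wrong data.
    printf("intended offset:  %llu\n", (unsigned long long)file_offset); // 5368709120
    printf("truncated cursor: %u\n", old_cursor);                        // 1073741824
    return 0;
}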

So I defined a new "lldb::offset_t" which should be used for all file offsets.
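
A minimal sketch of the idea (declarations simplified here; the exact signatures in the tree differ):

#include <cstdint>
#include <cstring>

namespace lldb {
    typedef uint64_t offset_t; // 64-bit file offsets, safe past the 4GB boundary
}

// A DataExtractor-style accessor advancing a 64-bit cursor (simplified):
static uint32_t GetU32(const uint8_t *data, lldb::offset_t *offset_ptr) {
    uint32_t value;
    memcpy(&value, data + *offset_ptr, sizeof(value));
    *offset_ptr += sizeof(value); // the cursor can now exceed 4GB without wrapping
    return value;
}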

After making this change, I temporarily enabled warnings for data loss and for unexpected implicit conversions, and found a ton of things that I fixed.
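
The exact warning flags aren't recorded here; with clang they would be along the lines of -Wconversion and -Wshorten-64-to-32, which flag code like this:

// Flagged when built with e.g. -Wshorten-64-to-32 (clang) or -Wconversion
// (hypothetical example of the pattern the audit would catch):
#include <cstdint>

uint32_t GetByteSize32(uint64_t file_size) {
    return file_size; // warning: implicit conversion loses integer precision
}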

Any functions that take an index internally should use "size_t" for indexes and should also return "size_t" for any sizes of collections.
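
As an illustration (hypothetical class, not taken from this patch), a collection interface following that rule looks roughly like this:

#include <cstddef>
#include <string>
#include <vector>

// Hypothetical example: indexes and sizes are size_t throughout, so they
// never truncate on 64-bit hosts the way int or uint32_t parameters can.
class StringCollection {
public:
    size_t GetSize() const { return m_strings.size(); }
    const char *GetStringAtIndex(size_t idx) const {
        return idx < m_strings.size() ? m_strings[idx].c_str() : nullptr;
    }
private:
    std::vector<std::string> m_strings;
};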

llvm-svn: 173463
Author: Greg Clayton
Date:   2013-01-25 18:06:21 +0000
Commit: c7bece56fa (parent d0ed6c249d)
248 changed files with 1676 additions and 1817 deletions

@@ -424,7 +424,7 @@ CommandObjectExpression::EvaluateExpression
             const char *error_cstr = result_valobj_sp->GetError().AsCString();
             if (error_cstr && error_cstr[0])
             {
-                int error_cstr_len = strlen (error_cstr);
+                const size_t error_cstr_len = strlen (error_cstr);
                 const bool ends_with_newline = error_cstr[error_cstr_len - 1] == '\n';
                 if (strstr(error_cstr, "error:") != error_cstr)
                     error_stream->PutCString ("error: ");