Arc Forum
2 points by rocketnia 4618 days ago | link | parent

Are you uploading a text file encoded in UTF-8, as 'readc expects? It's probably waiting for the rest of a character.

If 'readb gives you issues too, then that's not it.
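For instance (a hedged sketch, assuming a stream s whose next byte is the first byte of a multi-byte UTF-8 sequence, such as 226 for the character "…"):

  arc> (readb s)
  226
  arc> (readc s)
  ; blocks until the remaining bytes of the character arrive

'readb hands back raw bytes as they come in, so it can't stall mid-character the way 'readc can.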



1 point by lark 4615 days ago | link

readb gets stuck too.

Also, is there something like aform-multi that does not use fnids? I need a static upload url.

-----

2 points by lark 4615 days ago | link

This works at least:

  (mac form-multi (n . body)
    (w/uniq ga
      `(tag (form method 'post
                  enctype "multipart/form-data"
                  action ,n)
         ,@body)))
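A hypothetical op using it (the op and field names here are placeholders): because the form's action is a literal string rather than a fnid, the upload url stays static.

  (defop upload req
    (form-multi "upload_static"
      (gentag input type 'file name 'name)
      (submit "upload")))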

-----

2 points by lark 4615 days ago | link

Only 389120 bytes are read with 'readb.

This is precisely 380 * 1024, which is suspicious.

-----

1 point by akkartik 4615 days ago | link

I can read files past the 380K limit. Here's me reading a 400K file.

  arc> (= i 0)
  arc> (w/infile in "x" (whilet c readb.in ++.i))
  arc> i
  400000
Does this work for you?

-----

1 point by lark 4614 days ago | link

This example works for me, but it reads a local file, not an uploaded one.

Can you read the file that was POSTed and save it? Ignore parsing out the multipart stuff. Just save what came in through the request.

Here's the test at /upload_is_broken

  (mac form-multi (n . body)
    (w/uniq ga
      `(tag (form method 'post
                  enctype "multipart/form-data"
                  action ,n)
         ,@body)))

  (defop upload_is_broken req
    (form-multi "upload_static"
      (gentag input type 'file name 'name)
      (submit "upload")))

  (defop upload_static req
    (withs (n req!clen
            f (string "/dev/shm/" (uniq)))
      (pr "saving multipart data\n")
      (pr "clen is " n "\n")
      (w/outfile o f
        (whilet c (and (> n 0) (readb req!in))
          (-- n)
          ;;(pr "n now is " n)
          (writeb c o)))
      (pr "SAVED multipart data. Success!!!\n")))
This does not work for me.
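(For reference, since the loop above just counts clen down to zero, an equivalent sketch using repeat reads exactly clen bytes, assuming all of them arrive:

  (w/outfile o f
    (repeat n (writeb (readb req!in) o)))

Unlike the whilet version, this one errors rather than exiting early if the stream runs dry.)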

-----

1 point by akkartik 4614 days ago | link

It worked fine for me:

  saving multipart data
  clen is 1591333
  SAVED multipart data. Success!!!
What's the specific file you're uploading?

-----

1 point by lark 4614 days ago | link

I misled you, I'm sorry. The example I just provided works for me.

http://tyche.pu-toyama.ac.jp/~a-urasim/lsvl/data/bzip2_1.0.5...

  $ md5sum bzip2_1.0.5.orig.tar.gz 
  3c15a0c8d1d3ee1c46a1634d00617b1a  bzip2_1.0.5.orig.tar.gz

  saving multipart data
  clen is 841607
  SAVED multipart data. Success!!!
Yet the full app I have that uses this logic does not work. I'm not sure I can explain why.

-----

1 point by akkartik 4614 days ago | link

Thanks for the update. If you manage to narrow it down to a new code sample I'd love to see it.

-----

2 points by lark 4614 days ago | link

I verified that the example code I just provided does not work if nginx 0.7.67-3+squeeze2 from Debian is proxying connections to Anarki with the following configuration:

  server {
    listen 80;
    server_name  somewebsite.com;
    access_log /var/log/nginx/somewebsite.com.access.log;

    location / {
      proxy_pass        http://somewebsite.com:2012;
      proxy_set_header  X-Real-IP  $remote_addr;
    }
  }

-----

1 point by akkartik 4613 days ago | link

That's really useful, thanks. A quick google for 'nginx post length limit' brings up this link: http://www.rockia.com/2011/01/how-to-change-nginx-file-uploa... which suggests changing client_max_body_size. Does that help?

-----

1 point by lark 4612 days ago | link

Thanks for the link.

I tried setting the following in /etc/nginx/sites-available/somewebsite.com:

  server {
  # ... various vars as in http://arclanguage.org/item?id=16317 plus the following:
  client_max_body_size 10m;
  }
I also tried with client_max_body_size 10000000;

In both cases uploading the bzip2 file hangs.

I also tried setting client_max_body_size in /etc/nginx/nginx.conf, but there I get a different error:

  Restarting nginx: [emerg]: unknown directive "client_max_body_size" in /etc/nginx/nginx.conf:31
So it doesn't work for me. The documentation at http://nginx.org/en/docs/http/ngx_http_core_module.html#clie... says the default value is 1m. That means the bzip2 tarball, which at 841607 bytes is under 1m, should upload without a problem even with the default.

Update: Tried setting "client_max_body_size 32m;" under the "http" section in /etc/nginx/nginx.conf but posting still hangs.

-----

1 point by akkartik 4604 days ago | link

Ok, I finally got around to trying to replicate this, and you're right: that setting doesn't seem to make a difference.

Sorry this took so long.

-----

1 point by akkartik 4612 days ago | link

I assume you restarted the nginx service?

-----

1 point by lark 4612 days ago | link

Yes. Does upload in Anarki behind nginx work for you?

-----