[VX Ace] How can I make saved games record variables containing procs?

Discussion in 'Learning Ruby and RGSSx' started by Balrogic, Jan 11, 2015.

Thread Status:
Not open for further replies.
  1. TheoAllen

    TheoAllen Self-proclaimed jack of all trades Veteran

    Messages:
    4,488
    Likes Received:
    5,095
    Location:
    Riftverse
    First Language:
    Indonesian
    Primarily Uses:
    RMVXA
    #21
    Balrogic likes this.
  2. Balrogic

    Balrogic Veteran Veteran

    Messages:
    40
    Likes Received:
    17
    First Language:
    English
    That's pretty interesting... Looks like there's a reason everyone avoids eval like the plague and it never even got mentioned in the Ruby newbie tutorials I hit up. Definitely worth taking the time to code a temporary proc system if I'm going to make even moderate use of it, then.
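    (A minimal sketch of what such a temporary proc system could look like: the save file would only ever hold plain code strings, while the procs built from them live in a runtime-only cache. ProcCache and its methods are hypothetical names, not from any script in this thread.)

    Code:
    # Only code strings are stored (and can be marshaled into a save file);
    # the procs compiled from them are rebuilt on demand and never saved.
    module ProcCache
      SOURCES = {}   # key => code string; in VX Ace this hash would live
                     # somewhere that gets saved, e.g. on $game_system
      @cache  = {}   # key => compiled proc; runtime-only

      def self.register(key, code)
        SOURCES[key] = code
        @cache.delete(key)   # drop any stale compiled proc
      end

      def self.call(key, *args)
        @cache[key] ||= eval("proc { |*args| #{SOURCES[key]} }")
        @cache[key].call(*args)
      end
    end

    ProcCache.register(:heal, "args[0] + 10")
    ProcCache.call(:heal, 5)   # => 15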
     
    #22
  3. Tsukihime

    Tsukihime Veteran Veteran

    Messages:
    8,230
    Likes Received:
    3,071
    Location:
    Toronto
    First Language:
    English
    Given the heavy use of eval, it might be interesting if any eval call could be cached as a proc in such a system.


    For example, a parallel process that's calling eval (conditional branches, script calls, move routes, etc) would likely be making a lot of eval calls all the time.


    I know I like to make script calls in my events, even if there is no context involved, such as a call to `Input.trigger?(:C)`
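    (A rough sketch of that caching idea, assuming the cache is keyed by the code string itself; cached_eval is a hypothetical helper, the stock interpreter just calls eval directly.)

    Code:
    class Game_Interpreter
      EVAL_CACHE = {}   # code string => compiled proc

      # Compile each distinct script-call string once, then reuse the proc.
      def cached_eval(script)
        block = (EVAL_CACHE[script] ||= eval("proc { #{script} }"))
        instance_exec(&block)   # rebind self to this interpreter instance
      end
    end

    A call like cached_eval("Input.trigger?(:C)") would only pay the parse cost the first time that exact string is seen; instance_exec lets every interpreter instance share the same cached block.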
     
    #23
  4. Zeriab

    Zeriab Huggins! Veteran

    Messages:
    1,200
    Likes Received:
    1,256
    First Language:
    English
    Primarily Uses:
    RMXP
    Ruby on Rails has a pretty big impact, I bet. Using eval on tainted input is a security disaster waiting to happen. Using the proc creation methods shown here would be just as bad from that point of view.
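    (A toy illustration of the tainted-input point, purely for example's sake: the string is supposed to be data, but eval runs it as code.)

    Code:
    user_input = '1); puts("arbitrary code just ran"); (1'   # attacker-controlled "number"
    eval("damage = (#{user_input})")
    # Prints the message: the input escaped the expression it was meant to fill.
    # Wrapping the same string in a proc, e.g. eval("proc { (#{user_input}) }"),
    # would be exactly as dangerous.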

    Another big reason is that debugging can easily become a nightmare.

    @DoubleX:

    Thank you for the performance data :D

    Have you tried testing the performance impact of having multiple different pieces of code?

    What if, say, 1000 different pieces of code are each run 1000 times? What if it's 10000 pieces of code run 100 times? What about other distributions?

    *hugs*

     - Zeriab
     
    #24
    Solistra and Tsukihime like this.
  5. DoubleX

    DoubleX Just a nameless weakling Veteran

    Messages:
    1,463
    Likes Received:
    544
    First Language:
    Chinese
    Primarily Uses:
    N/A
    My guess is that the more resource-consuming the script code is, the closer the performance of eval, proc/lambda and instance_eval becomes. Also, proc and lambda have almost the same time performance in all of my test cases.

    For example:

    block1 = "
      r = []
      1_000.times { r.push((rand * 1_000).to_i.to_s.to_sym.id2name.to_f / 1_000) }
      r.sort! { |a, b| a <=> b }"
    block2 = "
      r = rand
      1_000.times { r = [r] }
      1_000.times { r = r[0] }"
    blocks = [block1, block2]
    class Test
      def control1
        r = []
        1_000.times { r.push((rand * 1_000).to_i.to_s.to_sym.id2name.to_f / 1_000) }
        r.sort! { |a, b| a <=> b }
      end
      def control2
        r = rand
        1_000.times { r = [r] }
        1_000.times { r = r[0] }
      end
      def make_def(method)
        instance_eval(method)
      end
    end
    test = Test.new
    blocks.each_with_index { |block, index|
      test.make_def(%Q(
        def new_block#{index + 1}
          #{block}
        end
      ))
    }
    #~ # block
    #~ p("block")
    #~ 10_000.times {
    #~   r = []
    #~   1_000.times { r.push((rand * 1_000).to_i.to_s.to_sym.id2name.to_f / 1_000) }
    #~   r.sort! { |a, b| a <=> b }
    #~   r = rand
    #~   1_000.times { r = [r] }
    #~   1_000.times { r = r[0] }
    #~ }
    #~ p("block")
    #~ # Roughly 49 seconds
    #~ # method
    #~ p("method")
    #~ 10_000.times {
    #~   test.control1
    #~   test.control2
    #~ }
    #~ p("method")
    #~ # Roughly 49 seconds
    #~ # instance_eval
    #~ p("instance_eval")
    #~ 10_000.times {
    #~   test.new_block1
    #~   test.new_block2
    #~ }
    #~ p("instance_eval")
    #~ # Roughly 49 seconds
    #~ # lambda
    #~ test = eval(
    #~   "lambda {
    #~     #{block1}
    #~     #{block2}
    #~   }")
    #~ p("lambda")
    #~ 10_000.times { test.call }
    #~ p("lambda")
    #~ # Roughly 50 seconds
    #~ # proc
    #~ test = eval(
    #~   "proc {
    #~     #{block1}
    #~     #{block2}
    #~   }")
    #~ p("proc")
    #~ 10_000.times { test.call }
    #~ p("proc")
    #~ # Roughly 50 seconds
    #~ # eval
    #~ p("eval")
    #~ 10_000.times {
    #~   eval(block1)
    #~   eval(block2)
    #~ }
    #~ p("eval")
    #~ # Roughly 51 seconds
    Direct block, direct method call, instance_eval, lambda, proc and eval are each run 10,000 times, and the times needed are roughly 49, 49, 49, 50, 50 and 51 seconds respectively.

    It suggests that their speeds are almost the same if the script code reaches this level of complexity.

    Another example:

    class Test
      def control
        r = []
        10000.times { r.push((rand * 10000).to_i.to_s.to_sym.id2name.to_f / 10000) }
        r.sort! { |a, b| a <=> b }
      end
      def make_def(method)
        instance_eval(method)
      end
    end
    block = "
      r = []
      10000.times { r.push((rand * 10000).to_i.to_s.to_sym.id2name.to_f / 10000) }
      r.sort! { |a, b| a <=> b }"
    test = Test.new
    test.make_def(%Q(
      def new_def
        #{block}
      end
    ))
    #~ p("block")
    #~ 1_000.times {
    #~   r = []
    #~   10000.times { r.push((rand * 10000).to_i.to_s.to_sym.id2name.to_f / 10000) }
    #~   r.sort! { |a, b| a <=> b }
    #~ }
    #~ p("block")
    #~ # Roughly 56 seconds
    #~ p("method")
    #~ 1_000.times { test.control }
    #~ p("method")
    #~ # Roughly 56 seconds
    #~ p("instance_eval")
    #~ 1_000.times { test.new_def }
    #~ p("instance_eval")
    #~ # Roughly 56 seconds
    #~ test = eval("lambda { #{block} }")
    #~ p("lambda")
    #~ 1_000.times { test.call }
    #~ p("lambda")
    #~ # Roughly 56 seconds
    #~ test = eval("proc { #{block} }")
    #~ p("proc")
    #~ 1_000.times { test.call }
    #~ p("proc")
    #~ # Roughly 56 seconds
    #~ p("eval")
    #~ 1_000.times { eval(block) }
    #~ p("eval")
    #~ # Roughly 56 seconds
    It's of course an even more unrealistic example, but it further suggests that if the script code is complicated enough, direct block, direct method call, instance_eval, lambda/proc and eval all have almost the same speed, as each of them runs 1,000 times in roughly 56 seconds.

    It seems to me that the time needed mainly consists of 2 components:

    1. Time needed to call the block

    2. Time needed to execute the block

    According to my informal benchmarks, direct block, direct method call, instance_eval, lambda/proc and eval differ almost solely in the time needed to call the block. I suspect that's because they all execute the block in similar ways, but the ways they call the block are quite different from each other (for example, eval needs to parse the script code string into executable code before executing it). If that's the case, then it explains why the more complicated the script code is, the closer the speeds of the above approaches become, as most of the time is spent executing the block itself rather than calling it.

    On the contrary, in the 1st benchmark of my previous reply, the script code is "nil". Obviously the time needed to execute the block is trivial, so almost all the time is spent on calling the block instead. Eval performs the worst here, as it's more than 17 times slower than proc/lambda and more than 30 times slower than instance_eval in that benchmark. Even the much, much more efficient proc/lambda are still slower than instance_eval by 80%.

    My conclusion about lambda/proc vs eval:

    1. Don't use proc/lambda if the script code is rarely executed; use eval in that case instead, as proc/lambda probably consumes more memory than eval.

    2. Don't use eval if the script code is frequently executed and time is more critical than memory, unless the script code reaches the complexity of that in the 1st benchmark in this reply.

    P.S.: Actually I'm learning instance_eval and class_eval, and it seems to me that they are even better than proc/lambda in terms of performance. Creating more methods on the fly may cost more memory, but I still think the speed outweighs that drawback. Plus, with instance_eval and class_eval you need not worry about saving the methods created on the fly (at least according to my tests), while saving procs/lambdas indirectly will be a bit tricky.
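    (A minimal sketch of that last point, with hypothetical names: if the code strings live in the save data and the methods are redefined from them whenever a game is loaded -- e.g. from something like DataManager.load_game -- then only plain strings ever get marshaled.)

    Code:
    class Game_System
      attr_accessor :formula_sources   # { "method_name" => "code string" }, plain data

      # Turn the stored strings back into real methods after loading a save,
      # so the methods themselves never need to be serialized.
      def rebuild_formulas
        (@formula_sources ||= {}).each do |name, code|
          self.class.class_eval("def #{name}; #{code}; end")
        end
      end
    end

    gs = Game_System.new
    gs.formula_sources = { "bonus_gold" => "100 + rand(50)" }
    gs.rebuild_formulas
    gs.bonus_gold   # => an Integer between 100 and 149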
     
    Last edited by a moderator: Jan 16, 2015
    #25
    TheoAllen and Tsukihime like this.
  6. Solistra

    Solistra Veteran Veteran

    Messages:
    593
    Likes Received:
    242
    The performance between these different methods of executing code doesn't really get closer -- the code itself simply takes longer to execute, and that obfuscates the actual performance of, say, eval versus instance_exec. Honestly, when testing these things, more performant code shows the actual overhead more clearly. In such cases (which closely relate to how eval is normally used in this community), you can clearly see that the execution speed is very, very different (on my end, eval is about five times slower than Proc#call or instance_exec).


    Also, Proc and lambda objects should have remarkably close performance -- under the hood, they're almost exactly the same object.
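    (For reference, a quick irb check of that: both literals produce a Proc, and the only visible difference is the lambda? flag, which controls argument checking and how return behaves.)

    Code:
    lam = lambda { |x| x }
    pr  = proc   { |x| x }
    lam.class    # => Proc
    pr.class     # => Proc
    lam.lambda?  # => true
    pr.lambda?   # => false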
     
    #26
    Enelvon and ♥SOURCE♥ like this.
  7. DoubleX

    DoubleX Just a nameless weakling Veteran

    Messages:
    1,463
    Likes Received:
    544
    First Language:
    Chinese
    Primarily Uses:
    N/A
    Are you talking about something like what I've posted in my 1st reply in this topic?

    class Test
      def make_def(method)
        instance_eval(method)
      end
    end
    block = "nil"
    test = Test.new
    test.make_def(%Q(
      def new_def
        #{block}
      end
    ))
    #~ p("instance_eval")
    #~ 100_000_000.times { test.new_def }
    #~ p("instance_eval")
    #~ # Roughly 20 seconds
    #~ test = eval("lambda { #{block} }")
    #~ p("lambda")
    #~ 100_000_000.times { test.call }
    #~ p("lambda")
    #~ # Roughly 36 seconds
    #~ test = eval("proc { #{block} }")
    #~ p("proc")
    #~ 100_000_000.times { test.call }
    #~ p("proc")
    #~ # Roughly 36 seconds
    #~ p("eval")
    #~ 10_000_000.times { eval(block) }
    #~ p("eval")
    #~ # Roughly 63 seconds
    Also, in my 2nd reply:

    It seems that I've misused the word performance (which is 100% my fault lol), as I used it to mean the overall time, which is the calling time (including parsing time for eval) + execution time.

    I want to use the benchmarks in my 1st reply and the 2nd benchmark in my 2nd reply to show that eval, proc/lambda, instance_eval, direct method call and direct block execution differ almost solely in the calling time.

    I also want to use the 1st benchmark in my 2nd reply to show the approximate complexity level at which a piece of script code makes them all have almost the same calling time + execution time.

    If performance solely means the calling time, then eval is of course much, much worse than proc/lambda, which are somewhat worse than instance_eval and direct method call, which in turn are noticeably worse than direct block execution.

    Using the script code "nil" as an example again:

    block = "nil"
    class Test
      def control
        nil
      end
      def make_def(method)
        instance_eval(method)
      end
    end
    test = Test.new
    test.make_def(%Q(
      def new_block
        #{block}
      end
    ))
    #~ # block
    #~ p("block")
    #~ 100_000_000.times { nil }
    #~ p("block")
    #~ # Roughly 13 seconds
    #~ # method
    #~ p("method")
    #~ 100_000_000.times { test.control }
    #~ p("method")
    #~ # Roughly 21 seconds
    #~ # instance_eval
    #~ p("instance_eval")
    #~ 100_000_000.times { test.new_block }
    #~ p("instance_eval")
    #~ # Roughly 21 seconds
    #~ # lambda
    #~ test = eval("lambda { #{block} }")
    #~ p("lambda")
    #~ 100_000_000.times { test.call }
    #~ p("lambda")
    #~ # Roughly 36 seconds
    #~ # proc
    #~ test = eval("proc { #{block} }")
    #~ p("proc")
    #~ 100_000_000.times { test.call }
    #~ p("proc")
    #~ # Roughly 37 seconds
    #~ # eval
    #~ p("eval")
    #~ 10_000_000.times { eval(block) }
    #~ p("eval")
    #~ # Roughly 62 seconds
    eval is run 10,000,000 times and takes roughly 62 seconds, while direct block execution, direct method call, instance_eval, lambda and proc are run 100,000,000 times, taking roughly 13, 21, 21, 36 and 37 seconds respectively.

    If eval were run 100,000,000 times, the time spent would be roughly 620 seconds.

    As the calling time of direct block execution should be trivial (if there's any calling at all), I'd assume the execution time of running the script code "nil" 100,000,000 times is roughly 13 seconds.

    If that's the case, then according to the benchmark, the calling time difference will be even more significant, as that of:

    - direct method call and instance_eval is roughly 8 seconds for calling 100,000,000 times (roughly 80 ns per call)

    - lambda/proc is roughly 23 seconds for calling 100,000,000 times (roughly 230 ns per call)

    - eval is roughly 607 seconds for calling 100,000,000 times (roughly 6,070 ns per call)

    Note that these numbers are clearly machine dependent. A more powerful machine will likely have a shorter calling time and a less powerful machine will likely have a longer calling time.

    P.S.: It seems to me that the longer the script code string, the longer the parsing time for eval, so using it with long but fast script code may make its calling time (which includes the parsing time) even longer compared to proc/lambda, instance_eval, direct method call and direct block execution.
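    (An informal way to check that parse cost on any given machine -- these numbers aren't from the thread, and the exact figures will vary; the long string is cheap to run but expensive to parse.)

    Code:
    require 'benchmark'

    short = "nil"
    long  = (["nil"] * 500).join(";")   # 500 no-op statements

    Benchmark.bm(7) do |x|
      x.report("short:") { 10_000.times { eval(short) } }
      x.report("long:")  { 10_000.times { eval(long)  } }
    end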
     
    Last edited by a moderator: Jan 16, 2015
    #27
  8. FenixFyreX

    FenixFyreX Fire Deity Veteran

    Messages:
    434
    Likes Received:
    307
    Location:
    A Volcano Somewhere
    First Language:
    English
    Do note that there is an alternative to eval() and an evaluated proc: RubyVM::InstructionSequence. It is almost functionally equivalent to eval, but faster than both options:

    $some_code = '1 + 1'
    $proc = eval "proc { #{$some_code} }"
    eval($some_code)
    $iseq = RubyVM::InstructionSequence.compile($some_code)
    $iseq.eval

    The ips benchmark test yielded this:

    Calculating -------------------------------------
                    iseq    83.386k i/100ms
                    eval    13.721k i/100ms
                    proc    81.837k i/100ms
    -------------------------------------------------
                    iseq      3.561M (± 2.2%) i/s -     17.845M
                    eval    171.542k (± 3.9%) i/s -    864.423k
                    proc      3.134M (± 1.8%) i/s -     15.713M

    Comparison:
                    iseq:  3561222.8 i/s
                    proc:  3134043.9 i/s - 1.14x slower
                    eval:   171541.9 i/s - 20.76x slower

    It's a hell of a lot to type, but it's quite a bit faster if you only have one segment of code that needs to be executed, etc. If you really, really need the speed, I'd recommend encrypting the code being saved and, when loading, using InstructionSequence. You can even specify a line number, file name, etc.

    # RubyVM::InstructionSequence.compile(source, file, path, line, opts)
    RubyVM::InstructionSequence.compile("1 + 1", "Game_Interpreter", "$RGSS_SCRIPTS", __LINE__)

    However, I think it uses the top-level binding (scope), not the current one, so beware of that.

    EDIT: Also, if you begin to compile AND eval the InstructionSequence intermittently throughout your code, it's almost equivalent to eval(), so you must compile it beforehand and 'eval' it as needed on the fly.
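    (Combining the compile-ahead advice with the caching idea from earlier in the thread -- ISEQ_CACHE and run_compiled are hypothetical names, and the top-level binding caveat above still applies.)

    Code:
    ISEQ_CACHE = {}   # code string => compiled instruction sequence

    def run_compiled(code)
      (ISEQ_CACHE[code] ||= RubyVM::InstructionSequence.compile(code)).eval
    end

    run_compiled("1 + 1")   # compiles on first use
    run_compiled("1 + 1")   # reuses the cached instruction sequence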
     
    Last edited by a moderator: Jan 17, 2015
    #28
    Evgenij, Zeriab and DoubleX like this.
  9. Lemur

    Lemur Crazed Ruby Hacker Veteran

    Messages:
    106
    Likes Received:
    124
    Location:
    *.*
    First Language:
    English
    If you really want to get accurate performance numbers for the code, you're missing something critical. What happens when that block gets called? It re-evaluates the anonymous function there. Here's how you get around that for better results:

    my_test = -> x { test code }
    10_000.times(&my_test)

    Otherwise, again, it's skewed.
    Times doesn't take block args, that's stupid....

    As far as more common usage of lambdas, I tend to do this a lot:

    def my_expensive_method
      @my_expensive_method ||= -> {
        # ridiculously expensive code I only need to run once and cache the value of
      }.call
    end

    Now then, as far as saving serialized code: don't. It's faster to serialize a DSL with JSON and reparse it afterwards. You want to get as far away from eval as possible, because it tends to open up nasty cans of worms real fast.

    Consider that you serialize symbols that are a list of method names:

    # You want this to run from the database:
    -> {
      Character.forward
      Character.backward
    }
    # We notice that these are both on the same object: Character
    # What do we have left?: forward, backward
    { character: [:forward, :backward] }
    # We serialize this to JSON for later, and write a parser:
    actionable_items = {
      character: self,
      menu: MenuItem
    }
    def actionable_item_parser(data)
      data.each { |item, actions|
        item_hook = actionable_items[item] or raise 'Invalid DSL element!'
        actions.each { |action| item_hook.send(action) }
      }
    end

    We've prevented the need for eval, and send is substantially faster. This also lets us lock down what can actually execute, all in one go.

    Now how much faster, you ask? It's not even fair:

    Code:
    [10] pry(main)> me
    => #<struct Person name="brandon", age=24>
    [11] pry(main)> send_test = -> me { -> { me.send(:name) } }.call(me)
    => #<Proc:0x007ff59b2cc228@(pry):40 (lambda)>
    [12] pry(main)> eval_test = -> me { -> { eval 'me.name' } }.call(me)
    => #<Proc:0x007ff59b43ee80@(pry):38 (lambda)>
    [13] pry(main)> Benchmark.measure { 100_000.times { eval_test.call } }.real
    => 0.558608
    [14] pry(main)> Benchmark.measure { 100_000.times { send_test.call } }.real
    => 0.017196
    [15] pry(main)> st = Benchmark.measure { 1_000_000.times { send_test.call } }.real
    => 0.158144
    [16] pry(main)> et = Benchmark.measure { 1_000_000.times { eval_test.call } }.real
    => 5.007241
    [17] pry(main)> "send takes #{(st / et) * 100}% the runtime of eval. Ouch"
    => "send takes 3.1583061410465367% the runtime of eval. Ouch"
    [18] pry(main)> "eval is #{(et / st) * 100}% slower than send. Ouch"
    => "eval is 3166.254173411574% slower than send. Ouch"
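    (To tie this back to the thread topic, a minimal sketch of the JSON round trip itself, using the hypothetical DSL hash from the post above: only plain JSON text ends up in the save data, and the action names come back as strings, which send accepts just as happily as symbols.)

    Code:
    require 'json'

    actions = { character: [:forward, :backward] }

    saved  = JSON.generate(actions)                    # => '{"character":["forward","backward"]}'
    loaded = JSON.parse(saved, symbolize_names: true)  # => {:character=>["forward", "backward"]}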
     
    Last edited by a moderator: Jan 17, 2015
    #29
    Evgenij, FenixFyreX, Zeriab and 2 others like this.
  10. WorldWakeSTU

    WorldWakeSTU Veteran Veteran

    Messages:
    95
    Likes Received:
    6
    I'm not looking to open a thread, but I'm trying to figure out how I should format some formal data sets,
    like datatype: stat(extras), stat2(extras), stat3(extras). I wanted to make them into a module since these would never change, but I couldn't figure out the format.
     
    #30
  11. Andar

    Andar Veteran Veteran

    Messages:
    28,675
    Likes Received:
    6,594
    Location:
    Germany
    First Language:
    German
    Primarily Uses:
    RMMV
    WorldWakeSTU, please refrain from necro-posting in a thread. Necro-posting is posting in a thread that has not had posting activity in over 30 days. You can review our forum rules here. Thank you.



    Our Forum rules require you to open a new topic for a new problem you have, because too often similar problems require different solutions and we don't want people to be confused by mixing several problems into one topic.


    Closing this
     
    #31